AWS / amazon-s3

4 posts


Announcing replication support and Intelligent-Tiering for Amazon S3 Tables

AWS has expanded the capabilities of Amazon S3 Tables by introducing Intelligent-Tiering for automated cost optimization and cross-Region replication for enhanced data availability. These updates address the operational overhead of managing large-scale Apache Iceberg datasets by automating storage lifecycle management and simplifying the architecture required for global data distribution. By adopting these features, organizations can reduce storage costs without manual intervention while ensuring consistent data access across multiple AWS Regions and accounts.

### Cost Optimization with S3 Tables Intelligent-Tiering

This feature automatically moves data between storage tiers based on access frequency to maximize cost efficiency without impacting application performance.

* The system uses three low-latency tiers: Frequent Access, Infrequent Access (about 40% lower cost than Frequent Access), and Archive Instant Access (about 68% lower cost than Infrequent Access).
* Transitions are automated: data moves to Infrequent Access after 30 days without access and to Archive Instant Access after 90 days.
* Automated table maintenance tasks, such as compaction and snapshot expiration, are tuned to skip colder files; compaction, for example, only processes data in the Frequent Access tier to avoid unnecessary compute and storage costs.
* Intelligent-Tiering can be set as the default storage class at the table bucket level with the AWS CLI commands `put-table-bucket-storage-class` and `get-table-bucket-storage-class` (see the sketch after this summary).

### Cross-Region and Cross-Account Replication

New replication support lets users maintain synchronized, read-only replicas of their S3 Tables across different geographic locations and ownership boundaries.

* Replication maintains chronological consistency and preserves parent-child snapshot relationships, so replicas remain identical to the source for query purposes.
* Replica tables are typically updated within minutes of changes to the source table and support independent encryption and retention policies to meet regional compliance requirements.
* The service eliminates the need for complex, custom-built architectures that track metadata transformations or manually sync objects between Iceberg tables.
* The functionality is primarily designed to reduce query latency for geographically distributed teams and to provide robust data protection for disaster recovery scenarios.

### Practical Implementation

To get the most from these features, organizations should consider setting Intelligent-Tiering as the default storage class at the bucket level for all new datasets to capture immediate cost savings. For global operations, read-only replicas in the Regions closest to end users will significantly improve query performance for analytics tools such as Amazon Athena and Amazon SageMaker.
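A minimal boto3 sketch of the bucket-level default described above. The operation names are the Python equivalents of the CLI commands the post names (`put-table-bucket-storage-class` / `get-table-bucket-storage-class`); the parameter names, the ARN, and the `INTELLIGENT_TIERING` value are assumptions for illustration, not confirmed API details.

```python
import boto3

s3tables = boto3.client("s3tables", region_name="us-east-1")

# Hypothetical table bucket ARN.
bucket_arn = "arn:aws:s3tables:us-east-1:111122223333:bucket/analytics-tables"

# Make Intelligent-Tiering the default storage class for new tables
# in this bucket (parameter names assumed from the CLI command shape).
s3tables.put_table_bucket_storage_class(
    tableBucketARN=bucket_arn,
    storageClass="INTELLIGENT_TIERING",  # assumed enum value
)

# Read the setting back to verify.
print(s3tables.get_table_bucket_storage_class(tableBucketARN=bucket_arn))
```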


Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables

Amazon S3 Storage Lens has introduced three significant updates designed to provide deeper visibility into storage performance and usage patterns at scale. By adding dedicated performance metrics, support for billions of prefixes, and direct export to Amazon S3 Tables, AWS enables organizations to better optimize application latency and storage costs. These enhancements allow for more granular, data-driven decisions across entire AWS organizations or specific high-performance workloads.

### Enhanced Performance Metric Categories

The update introduces eight new performance-related metric categories available through the S3 Storage Lens advanced tier. These metrics are designed to pinpoint specific architectural bottlenecks that could impact application speed.

* **Request and Storage Distributions:** New metrics track the distribution of read/write request sizes and object sizes, helping identify small-object patterns that might be better suited for Amazon S3 Express One Zone.
* **Error and Latency Tracking:** Users can now monitor concurrent PUT 503 errors to identify throttling and analyze FirstByteLatency and TotalRequestLatency to measure end-to-end request performance.
* **Data Transfer Efficiency:** Metrics for cross-Region data transfer help identify high-cost or high-latency access patterns, suggesting where compute resources should be co-located with storage.
* **Access Patterns:** Tracking unique objects accessed per day identifies "hot" datasets that could benefit from higher-performance storage tiers or caching.

### Support for Billions of Prefixes

S3 Storage Lens has expanded its analytical scale to support monitoring billions of prefixes, allowing organizations with massive, complex data structures to maintain granular visibility without sacrificing performance or detail.

* **Granular Visibility:** Users can drill down into massive datasets to find the specific prefixes causing performance degradation or cost spikes.
* **Scalable Analysis:** Even the largest data lakes can now be monitored at a level of detail previously limited to smaller buckets.

### Integration with Amazon S3 Tables

The service now supports direct export of storage metrics to Amazon S3 Tables, which is optimized for high-performance analytics. This integration streamlines the workflow for administrators who need to run complex queries on their storage metadata.

* **Analytical Readiness:** Exporting to S3 Tables makes it easier to use SQL-based tools to query storage trends and performance over time (see the sketch after this summary).
* **Automation:** Automated reporting pipelines can now handle the massive volume of data generated by prefix-level monitoring.

To take full advantage of these features, enable the S3 Storage Lens advanced tier and configure prefix-level monitoring for buckets containing mission-critical or high-throughput data. Organizations experiencing latency issues should review the new request size distribution metrics to determine whether batching objects or migrating to S3 Express One Zone would improve performance.
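A hedged sketch of how metrics exported to S3 Tables might be queried with a SQL-based tool. The Athena client call is standard boto3; the database, table, and column names (`storage_lens_db`, `storage_lens_metrics`, `first_byte_latency_ms`, `report_date`) are hypothetical placeholders for whatever schema your export configuration produces.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Find the prefixes with the highest average first-byte latency this week.
query = """
SELECT prefix, AVG(first_byte_latency_ms) AS avg_first_byte_ms
FROM storage_lens_metrics                 -- hypothetical exported table
WHERE report_date >= date_add('day', -7, current_date)
GROUP BY prefix
ORDER BY avg_first_byte_ms DESC
LIMIT 20
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "storage_lens_db"},  # hypothetical
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```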


New capabilities to optimize costs and improve scalability on Amazon RDS for SQL Server and Oracle

Amazon Web Services has introduced several key updates to Amazon RDS for SQL Server and Oracle designed to reduce operational overhead and licensing expenses. With SQL Server Developer Edition support and high-performance M7i/R7i instances with customizable CPU options, organizations can scale their development and production environments more efficiently. These enhancements let teams mirror production features in test environments and right-size resource allocation without the financial burden of traditional enterprise licensing.

### SQL Server Developer Edition for Non-Production Workloads

* Amazon RDS now supports SQL Server Developer Edition, providing the full feature set of Enterprise Edition at no licensing cost for development and testing environments.
* The update brings consistency across the database lifecycle: developers can use RDS features such as automated backups, software updates, and encryption while testing Enterprise-level functionality.
* To deploy, users upload SQL Server binary files to Amazon S3; existing data can be migrated from Standard or Enterprise editions using native backup and restore operations.

### Performance and Licensing Optimization via M7i/R7i Instances

* RDS for SQL Server now supports M7i and R7i instance types, which offer up to 55% lower costs compared to previous-generation instances.
* The billing structure for these instances improves transparency by separating Amazon RDS DB instance costs from software licensing fees.
* The "Optimize CPU" capability allows users to customize the number of vCPUs on license-included instances, reducing licensing costs while keeping the high memory and storage performance of larger instance classes (see the sketch after this summary).

### Expanded Storage and Scalability for RDS

* The updates include expanded storage capabilities for both Amazon RDS for Oracle and RDS for SQL Server to accommodate growing data requirements.
* These enhancements support a wide range of workloads, providing flexibility for diverse compute and storage needs across development, testing, and production tiers.

These updates represent a significant shift toward more granular control over database expenditure and performance. For organizations running heavy SQL Server or Oracle workloads, using Developer Edition for non-production tasks and migrating to M7i/R7i instances with optimized CPU settings can drastically reduce total cost of ownership while maintaining high scalability.
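A minimal boto3 sketch of the "Optimize CPU" capability: launching a license-included R7i instance with fewer vCPUs than the instance class default to cut licensing costs while keeping the class's memory. `ProcessorFeatures` is an existing RDS API parameter; the identifiers, core counts, and storage values below are illustrative assumptions only.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="sqlserver-prod",    # hypothetical name
    DBInstanceClass="db.r7i.4xlarge",
    Engine="sqlserver-ee",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",          # use Secrets Manager in practice
    AllocatedStorage=500,
    LicenseModel="license-included",
    # Request fewer vCPUs than the class default to reduce license costs.
    ProcessorFeatures=[
        {"Name": "coreCount", "Value": "4"},
        {"Name": "threadsPerCore", "Value": "1"},  # disable SMT if desired
    ],
)
```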


Amazon S3 Vectors now generally available with increased scale and performance

Amazon S3 Vectors has reached general availability, making S3 the first cloud object storage service with native support for storing and querying vector data. This serverless offering lets organizations cut total cost of ownership by up to 90% compared to specialized vector database solutions while providing the performance required for production-grade AI applications. By building vector capabilities directly into S3, AWS enables a simplified architecture for retrieval-augmented generation (RAG), semantic search, and multi-agent workflows.

### Massive Scale and Index Consolidation

General availability brings a significant increase in data capacity, allowing users to manage massive datasets without complex infrastructure workarounds.

* **Increased Index Limits:** Each index can now store and search across up to 2 billion vectors, a 40x increase from the 50 million limit during the preview.
* **Bucket Capacity:** A single vector bucket can now scale to hold up to 20 trillion vectors.
* **Simplified Architecture:** The increased per-index scale removes the need for developers to shard data across multiple indexes or implement custom query federation logic.

### Performance and Latency Optimizations

The service has been tuned to meet the low-latency requirements of interactive applications such as conversational AI and real-time inference.

* **Query Response Times:** Frequent queries now achieve latencies of roughly 100 ms or less, while infrequent queries consistently return results in under one second.
* **Enhanced Retrieval:** Users can now retrieve up to 100 search results per query (up from 30), providing broader context for RAG applications.
* **Write Throughput:** The system supports up to 1,000 PUT transactions per second for streaming single-vector updates, so new data is immediately searchable (see the sketch after this summary).

### Serverless Efficiency and Ecosystem Integration

S3 Vectors is fully serverless: there are no instances to provision or manage, and users pay only for active storage and queries.

* **Amazon Bedrock Integration:** It is now generally available as a vector storage engine for Bedrock Knowledge Bases, facilitating the building of RAG applications.
* **OpenSearch Support:** Integration with Amazon OpenSearch lets users keep vectors in S3 Vectors while leveraging OpenSearch for advanced analytics and search features.
* **Expanded Footprint:** The service is now available in 14 AWS Regions, up from five during the preview.

With its massive scale and up to 90% cost reduction, S3 Vectors is a primary candidate for organizations moving AI prototypes into production. Developers should consider migrating high-volume vector workloads to S3 Vectors to benefit from the serverless operational model and native integration with the broader AWS AI stack.
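A hedged sketch of the write and query paths described above, assuming the `s3vectors` API shape shown in the preview-era documentation (`put_vectors` / `query_vectors`). The bucket and index names, the toy embeddings, and the response field names are illustrative assumptions, not confirmed details.

```python
import boto3

s3vectors = boto3.client("s3vectors", region_name="us-east-1")

# Stream a single-vector update; the post cites up to 1,000 PUTs per second.
s3vectors.put_vectors(
    vectorBucketName="rag-embeddings",      # hypothetical bucket
    indexName="docs-index",                 # hypothetical index
    vectors=[{
        "key": "doc-42",
        "data": {"float32": [0.12, -0.07, 0.33, 0.91]},  # toy embedding
        "metadata": {"source": "faq.md"},
    }],
)

# Retrieve up to 100 results per query at GA (up from 30 in preview).
resp = s3vectors.query_vectors(
    vectorBucketName="rag-embeddings",
    indexName="docs-index",
    queryVector={"float32": [0.10, -0.05, 0.30, 0.88]},
    topK=100,
    returnMetadata=True,
    returnDistance=True,
)
for match in resp["vectors"]:               # assumed response shape
    print(match["key"], match.get("distance"))
```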