Coupang's Finance and Engineering teams collaborated to optimize cloud expenditures, guided by resource efficiency and the company's "Hate Waste" leadership principle. Through a dedicated optimization project team and data-driven analytics, the company reduced on-demand costs by millions of dollars without compromising business growth. The initiative transformed cloud management from a reactive expense into a proactive engineering culture centered on financial accountability and technical efficiency.

### Forming the Optimization Project Team

* A specialized team of Cloud Infrastructure Engineers and Technical Program Managers (TPMs) was established to bridge the gap between finance and engineering.
* The project team educated domain teams about the variable cost model of cloud services, moving them away from a fixed-cost mindset.
* Technical experts helped domain teams adopt cost-efficient technologies, such as ARM-based AWS Graviton processors and AWS Spot Instances for data processing.
* The initiative established clear ownership, ensuring that each domain team understood and managed its own cloud resource usage.

### Analytics and Dashboards for Visibility

* Engineers built custom dashboards using Amazon Athena to process Amazon CloudWatch data, providing deep insight into resource performance.
* The team fed AWS Cost & Usage Reports (CUR) into internal Business Intelligence (BI) tools for granular visibility into spending patterns.
* Finance teams worked alongside engineers to align technical roadmaps with monthly and quarterly budget goals, making cost management a shared responsibility.

### Strategies for Usage and Cost Reduction

* **Spend Less (Usage Reduction):** Automation ensured that non-production resources were active only when needed, yielding a 25% cost saving for those environments.
* **Pay Less (Right-sizing):** The team analyzed usage patterns to manually identify and decommission unused EC2 resources across all domain teams.
* **Instance and Storage Optimization:** The project prioritized migrating workloads to the latest instance generations and optimizing Amazon S3 storage structures to reduce costs for data at rest.

To achieve sustainable cloud efficiency, organizations should move beyond simple monitoring and foster an engineering culture where resource management is a core technical discipline. Prioritizing automated resource scheduling and adopting modern, high-efficiency hardware like Graviton instances are essential steps for any large-scale cloud operation looking to maximize its return on investment.
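The automated scheduling of non-production resources can be sketched as a small policy function. The business-hours window and the environment-tag convention below are illustrative assumptions, not Coupang's actual implementation:

```python
from datetime import datetime, time

# Hypothetical schedule: non-production resources run only on
# weekdays between 08:00 and 20:00; production is always on.
BUSINESS_START = time(8, 0)
BUSINESS_END = time(20, 0)

def should_be_running(now: datetime, env: str) -> bool:
    """Decide whether a resource tagged with `env` should be up at `now`."""
    if env == "production":
        return True
    is_weekday = now.weekday() < 5  # Mon=0 .. Fri=4
    return is_weekday and BUSINESS_START <= now.time() < BUSINESS_END

# A scheduler (e.g. a cron-triggered job) would evaluate this policy for each
# tagged instance and start or stop it through the cloud provider's API.
```

With a window like this, non-production instances are stopped for roughly two-thirds of the week, which is the kind of idle time the reported 25% environment savings came from eliminating.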
Coupang has developed an internal SCM Workflow platform to streamline the complex data and operational needs of its Supply Chain Management team. With low-code and no-code functionality, the platform lets developers, data scientists, and business analysts build data pipelines and launch services without the traditional bottlenecks of manual development.

### Addressing Inefficiencies in SCM Data Management

* The SCM team manages a massive network of suppliers and fulfillment centers (FCs) in which demand forecasting and inventory distribution require constant data feedback.
* Traditionally, non-technical stakeholders like business analysts (BAs) relied heavily on developers to build or modify data pipelines, leading to high communication costs and slow responses to changing business requirements.
* The platform aims to simplify the complexity of traditional tools like Jenkins, Airflow, and Jupyter Notebooks, providing a unified interface for data creation and visualization.

### Democratizing Access with the No-code Data Builder

* The "Data Builder" lets users perform data queries, extraction, and system integration through a visual interface rather than by writing backend code.
* It provides seamless access to the wide array of data sources used across Coupang, including Redshift, Hive, Presto, Aurora, MySQL, Elasticsearch, and S3.
* Users construct workflows by creating "nodes" for specific tasks, such as extracting inventory data from Hive or calculating transfer quantities, and linking them together to automate complex decisions like inter-center product transfers.

### Expanding Capabilities through Low-code Service Building

* The platform also functions as a "Service Builder," allowing users to expand domains and launch simple services without building new infrastructure from scratch.
* Developers can focus on high-level algorithm development while data scientists apply and test new models directly in the production environment.
* By reducing the code changes needed to reflect new requirements, the platform significantly increases the agility of the SCM pipeline.

Organizations managing complex, data-driven ecosystems can significantly reduce operational friction by adopting low-code/no-code platforms. Empowering non-technical stakeholders to handle data processing and service integration not only accelerates innovation but also frees engineering resources for core architectural challenges.
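The node-and-link execution model behind a builder like this can be sketched in a few lines. The `Node` class, the dependency-first execution order, and the inventory-transfer example are illustrative assumptions, not the platform's real API:

```python
from typing import Any, Callable, Optional

class Node:
    """One task in a visual workflow, e.g. 'extract inventory from Hive'."""

    def __init__(self, name: str, fn: Callable[..., Any]):
        self.name = name
        self.fn = fn
        self.upstream: list = []

    def after(self, *nodes: "Node") -> "Node":
        """Link this node to run after the given upstream nodes."""
        self.upstream.extend(nodes)
        return self

def run(node: Node, cache: Optional[dict] = None) -> Any:
    """Execute upstream dependencies first, then the node itself, caching results."""
    cache = {} if cache is None else cache
    if node.name not in cache:
        inputs = [run(dep, cache) for dep in node.upstream]
        cache[node.name] = node.fn(*inputs)
    return cache[node.name]

# Hypothetical two-node workflow: extract stock levels, then plan an
# inter-center transfer to balance inventory between two FCs.
extract = Node("extract_inventory", lambda: {"FC1": 120, "FC2": 30})
transfer = Node(
    "plan_transfer",
    lambda inv: {"FC1->FC2": (inv["FC1"] - inv["FC2"]) // 2},
).after(extract)
```

A visual builder would generate a graph like this from drag-and-drop nodes; executing the terminal node runs the whole pipeline.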
Coupang has implemented a machine learning-based prediction system that optimizes its inbound logistics process by forecasting the number of trucks required for product deliveries. By analyzing historical logistics data and vendor characteristics, the system minimizes resource waste at fulfillment center docks and prevents operational delays caused by slot shortages. This data-driven approach ensures that limited dock slots are allocated efficiently, improving overall supply chain speed and reliability.

### Challenges in Inbound Logistics

* Fulfillment centers operate with a fixed number of "docks" for unloading and specific time "slots" assigned to each truck.
* Inaccurate predictions create a resource dilemma: underestimating slots causes unloading delays and backlogs, while overestimating leaves docks idle and wastes capacity.
* The goal was to move beyond manual estimation to an automated system that balances vendor requirements against actual facility throughput.

### Feature Engineering and Data Collection

* The team performed Exploratory Data Analysis (EDA) on roughly 800,000 inbound records collected over two years.
* In-depth interviews with domain experts and logistics managers surfaced hidden patterns and qualitative factors that influence truck requirements.
* The final feature set was refined through feature engineering, focusing on vendor-specific behaviors and the physical characteristics of the delivered products.

### LightGBM Implementation and Optimization

* LightGBM was selected for its high performance on large datasets and its efficiency in handling categorical features.
* The model uses a leaf-wise tree growth strategy, which trains faster and reaches lower loss than traditional level-wise growth algorithms.
* Hyperparameters were tuned with Bayesian optimization, which finds effective model configurations more efficiently than traditional grid search.
* The trained model is integrated directly into the booking system, recommending truck quantities to vendors in real time during the application process.

### Operational Trade-offs and Results

* The system must navigate the trade-off between under-prediction (which risks logistical bottlenecks) and over-prediction (which risks resource waste).
* By automating slot prediction, Coupang has reduced vendors' manual workload and improved the accuracy of fulfillment center scheduling.
* This optimization allows more products to be processed in less time, directly contributing to faster delivery for the end customer.

By replacing manual estimates with a LightGBM-based predictive model, Coupang has synchronized vendor deliveries with fulfillment center capacity. This technical shift not only maximizes dock utilization but also builds a more resilient and scalable inbound supply chain.
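The under- versus over-prediction trade-off can be made concrete: when a missing slot costs more than an idle one, the cost-minimizing booking is not the most likely truck count but one shifted toward over-provisioning. A minimal sketch of that decision rule, with hypothetical penalty values not taken from Coupang's system:

```python
# Hypothetical asymmetric penalties: an under-provisioned slot (delayed
# unloading, backlog) hurts more than an over-provisioned one (idle dock).
UNDER_COST = 5.0  # per missing truck slot
OVER_COST = 1.0   # per idle truck slot

def expected_cost(slots: int, scenarios: list) -> float:
    """Expected cost of booking `slots`, over (trucks_needed, probability) pairs."""
    total = 0.0
    for needed, p in scenarios:
        if needed > slots:
            total += p * UNDER_COST * (needed - slots)
        else:
            total += p * OVER_COST * (slots - needed)
    return total

def best_slots(scenarios: list) -> int:
    """Pick the slot count with the lowest expected cost."""
    counts = [n for n, _ in scenarios]
    return min(range(min(counts), max(counts) + 1),
               key=lambda s: expected_cost(s, scenarios))

# A model predicting 2, 3, or 4 trucks with these probabilities:
scenarios = [(2, 0.3), (3, 0.5), (4, 0.2)]
```

Here `best_slots(scenarios)` returns 4 even though 3 trucks is most likely, because the 5:1 penalty ratio makes hedging against a shortage cheaper than risking one.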
Coupang's internal Machine Learning (ML) platform is a standardized ecosystem designed to accelerate the transition from experimental research to stable production services. By centralizing automated pipelines, feature engineering, and scalable inference, the platform addresses the operational complexity of managing ML at enterprise scale. This infrastructure lets engineers focus on model innovation rather than manual resource management, driving efficiency across Coupang's diverse services.

### Addressing Scalability and Development Bottlenecks

* The platform drastically reduces time to market by providing ready-to-use services, eliminating the need for engineers to build custom infrastructure for every model.
* Integrating Continuous Integration and Continuous Deployment (CI/CD) into the ML lifecycle ensures that updates to data, code, and models are handled with the same rigor as traditional software engineering.
* By optimizing ML computing resources, the platform scales training and inference workloads efficiently, preventing infrastructure costs from spiraling as the number of models grows.

### Core Services of the ML Platform

* **Notebooks and Pipelines:** Integrated Jupyter environments support ad-hoc exploration, while workflow orchestration tools enable reproducible ML pipelines.
* **Feature Engineering:** A dedicated feature store facilitates reuse of data components and ensures consistency between the features used during model training and those used in real-time inference.
* **Scalable Training and Inference:** Dedicated clusters handle high-performance model training, and robust hosting services serve real-time and batch predictions.
* **Monitoring and Observability:** Automated tools track model performance and data drift in production, alerting engineers when a model's accuracy degrades as real-world data shifts.

### Real-World Success in Search and Pricing

* **Search Query Understanding:** The platform enabled training of Ko-BERT (Korean Bidirectional Encoder Representations from Transformers), significantly improving search accuracy by better understanding customer intent.
* **Real-time Dynamic Pricing:** Using the platform's low-latency inference services, Coupang predicts and adjusts product prices in real time based on fluctuating market conditions and inventory levels.

To maintain a competitive edge in e-commerce, organizations should move away from fragmented, ad-hoc ML workflows toward a unified platform that treats ML as a first-class citizen of the software development lifecycle. Investing in such a platform not only speeds deployment but also ensures the long-term reliability and observability of production models.
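The train/serve consistency a feature store provides can be sketched as a registry where each feature is defined exactly once and both the training job and the inference service call the same definition. The decorator-based registry and the order features below are illustrative assumptions, not Coupang's actual platform API:

```python
from typing import Callable, Dict

# A tiny feature registry: define each feature once, reuse it in both the
# offline training path and the online inference path, avoiding skew.
_REGISTRY: Dict[str, Callable[[dict], float]] = {}

def feature(name: str):
    """Register a feature computation under a stable name."""
    def decorator(fn):
        _REGISTRY[name] = fn
        return fn
    return decorator

def compute_features(entity: dict, names: list) -> dict:
    """Compute the requested features for one entity; called identically by
    batch training jobs and the real-time inference service."""
    return {n: _REGISTRY[n](entity) for n in names}

# Hypothetical features over an order record.
@feature("price_per_unit")
def price_per_unit(order: dict) -> float:
    return order["total_price"] / order["quantity"]

@feature("is_bulk")
def is_bulk(order: dict) -> float:
    return 1.0 if order["quantity"] >= 10 else 0.0
```

Because both paths go through `compute_features`, a change to a feature definition propagates to training and serving together, which is the consistency guarantee the bullet above describes.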