llm


toss

Will developers be replaced by AI?

The current AI hype cycle is a significant economic bubble in which massive infrastructure investments of $560 billion far outweigh the modest $35 billion in generated revenue. However, drawing parallels to the 1995 dot-com era, the author argues that while short-term expectations are overblown, the long-term transformation of the developer role is inevitable. The conclusion is that developers won't be replaced but will instead evolve into "Code Creative Directors" who manage AI through the lens of technical abstraction and delegation.

### The Economic Bubble and Amara’s Law

* The industry is experiencing a 16:1 imbalance between AI investment and revenue, with 95% of generative AI implementations reportedly failing to deliver clear efficiency improvements.
* Amara’s Law suggests that we are overestimating AI's short-term impact while potentially underestimating its long-term effects.
* Much of the current "AI-driven" job market contraction is actually a result of companies cutting personnel costs to fund expensive GPU infrastructure and AI research.

### Jevons Paradox and the Evolution of Roles

* Jevons Paradox indicates that as the "cost" of producing code drops due to AI efficiency, the total demand for software and the complexity of systems will paradoxically increase.
* The developer’s identity is shifting from "code producer" to "system architect," focusing on agent orchestration, result verification, and high-level design.
* AI functions as a "power tool" similar to game engines, allowing small teams to achieve professional-grade output while amplifying the capabilities of senior engineers.

### Delegation as a Form of Abstraction

* Delegating a task to AI is an act of "work abstraction," which involves choosing which low-level details a developer can afford to ignore.
* The technical boundary of what is "hard to delegate" is constantly shifting; for example, a complex RAG (Retrieval-Augmented Generation) pipeline built for GPT-4 might become obsolete with the release of a more capable model like GPT-5.
* The focus for developers must shift from "what is easy to delegate" to "what *should* be delegated," distinguishing between routine boilerplate and critical human judgment.

### The Risks of Premature Abstraction

* Abstraction does not eliminate complexity; it simply moves it into the future. If the underlying assumptions of an AI-generated system change, the abstraction "leaks" or breaks.
* Sudden shifts in scaling (traffic surges), regulation (GDPR updates), or security (zero-day vulnerabilities) expose the limitations of AI-delegated work, requiring senior intervention.
* Poorly managed AI delegation can lead to "abstraction debt," where the cost of fixing a broken AI-generated system exceeds the cost of having written it manually from the start.

To thrive in this environment, developers should embrace AI not as a replacement, but as a layer of abstraction. Success requires mastering the ability to define clear boundaries for AI: delegating routine CRUD operations and boilerplate while retaining human control over architecture, security, and complex business logic.

kakao

Kanana-2 Development Story (2)

Kakao’s development of the Kanana-2 model family represents a strategic shift toward Agentic AI, prioritizing complex reasoning and execution capabilities over simple conversational fluency. By implementing a sophisticated post-training pipeline, including a specialized Mid-training stage and refined reinforcement learning, the team successfully enhanced the model's instruction-following and tool-calling performance. This methodology ensures that the 30B-parameter models excel in logical tasks and real-world agentic environments while maintaining high linguistic stability in both English and Korean.

## Mid-training and Catastrophic Forgetting Prevention

* A 250B-token Mid-training stage was introduced between Pre-training and Post-training to bridge the gap in reasoning, coding, and tool-calling capabilities.
* The dataset comprised 200B tokens of high-quality reasoning data (Chain-of-Thought math and code) and 50B tokens of "replay" data from the original pre-training set.
* This replay strategy specifically targeted "Catastrophic Forgetting," preventing the model from losing its Korean linguistic nuances and performance on benchmarks like KoMT-bench while it gained English-heavy reasoning skills.
* Experimental results indicated that Mid-training serves as a foundational "force multiplier," leading to faster convergence and higher performance ceilings during subsequent Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) stages.

## Enhanced Instruction Following and Tool Calling

* To optimize for Agentic AI, the developers focused on Instruction Following (IFEval) by synthesizing high-quality, long-form responses that strictly adhere to complex constraints.
* Tool-calling capabilities were improved using "Rejection Sampling" (Iterative SFT), where model-generated trajectories are validated in a real execution environment and only successful outcomes are retained for training (see the sketch after this summary).
* The training data was categorized into distinct buckets (Chat, Math, Code, and Tool Calling), allowing for a more balanced recipe compared to previous Kanana versions.
* This approach specifically addressed multi-turn and multi-tool scenarios, ensuring the model can handle the recursive logic required for autonomous agents.

## Parallel Reinforcement Learning and Calibration Tuning

* A "Parallel RL" framework was adopted to optimize different capabilities simultaneously: the "Chat" track focused on helpfulness and safety, while the "Logic" track focused on accuracy in math and programming.
* The pipeline moved beyond standard SFT to include Reinforcement Learning from Human Feedback (RLHF), utilizing DPO and PPO-style methods to align the model with human preferences.
* A final "Calibration Tuning" step was implemented to ensure the model’s internal confidence levels match its actual accuracy, effectively reducing hallucinations and improving reliability in technical tasks.
* Comparative benchmarks show that the Kanana-2 Instruct and Thinking models significantly outperform earlier versions and rival larger open-source models in reasoning and coding benchmarks like HumanEval and GSM8K.

The Kanana-2 development cycle demonstrates that achieving "Agentic" performance requires more than just scaling data; it requires a structured transition from general language understanding to execution-verified reasoning. For organizations building AI agents, the Kanana-2 post-training recipe suggests that integrating environment-validated feedback and balancing reasoning data with foundational language "replays" is critical for creating reliable, multi-functional models.
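The rejection-sampling loop lends itself to a compact illustration. The sketch below shows the general shape of environment-validated SFT data collection; the `model`, `sandbox`, and `task` interfaces are hypothetical stand-ins, not Kakao's actual pipeline code.

```python
import json

def collect_tool_calling_data(model, sandbox, tasks, samples_per_task=8):
    """Rejection sampling (iterative SFT): keep only trajectories that
    actually succeed when replayed in a real execution environment."""
    kept = []
    for task in tasks:
        for _ in range(samples_per_task):
            # The model proposes a multi-turn tool-calling trajectory.
            trajectory = model.generate_trajectory(task.prompt)
            # Validate every tool call by executing it for real.
            result = sandbox.execute(trajectory.tool_calls)
            if result.success and task.check(result.output):
                kept.append({"prompt": task.prompt,
                             "response": trajectory.text})
                break  # one verified sample per task is enough here
    return kept

def dump_sft(records, path="tool_calling_sft.jsonl"):
    """Write verified pairs out as SFT training data."""
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r, ensure_ascii=False) + "\n")
```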

line

Building an Enterprise LLM Service 1

LY Corporation’s engineering team developed an AI assistant for their private cloud platform, Flava, by prioritizing "context engineering" over traditional prompt engineering. To manage a complex environment of 260 APIs and hundreds of technical documents, they implemented a strategy of progressive disclosure to ensure the LLM receives only the most relevant information for any given query. This approach allows the assistant to move beyond simple RAG-based document summarization to perform active diagnostics and resource management based on real-time API data.

### Performance Limitations of Long Contexts

* Research indicates that LLM performance can drop by 13.9% to 85% as context length increases, even if the model technically supports a large token window.
* The phenomenon of "context rot" occurs when low-quality or irrelevant information is mixed into the input, causing the model to generate confident but incorrect answers.
* Because LLMs are stateless, maintaining conversation history and processing dense JSON responses from multiple APIs quickly exhausts context windows and degrades reasoning quality.

### Progressive Disclosure and Tool Selection

* The system avoids loading all 260+ API definitions at once; instead, it analyzes the user's intent to select only the necessary tools, such as loading only Redis-related APIs when a user asks about a cluster.
* Specific product usage hints, such as the distinction between private and CDN settings for Object Storage, are injected only when those specific services are invoked.
* This phased approach significantly reduces token consumption and prevents the model from being overwhelmed by irrelevant technical specifications.

### Response Guidelines and the "Mock Tool Message" Strategy

* The team distinguished between "System Prompts" (global rules) and "Response Guidelines" (situational instructions), such as directing users to a console UI before suggesting CLI commands.
* Injecting specific guidelines into the system prompt often caused "instruction conflict," where the LLM might hallucinate information to satisfy a guideline while ignoring core requirements like using search tools.
* To resolve these conflicts, the team utilized "ToolMessages" to inject guidelines: by formatting instructions as if they were results from a tool execution, the LLM treats the information as factual context rather than a command that might override the system prompt (see the sketch below).

To build a robust enterprise LLM service, developers should focus on dynamic context management rather than static prompt optimization. Treating operational guidelines as external data via mock tool messages, rather than system instructions, provides a scalable way to reduce hallucinations and maintain high performance across hundreds of integrated services.
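The mock-tool-message idea can be illustrated with the standard OpenAI chat-completions message format, onto which the ToolMessage concept maps. The tool name `get_response_guidelines`, the guideline text, and the model choice below are illustrative assumptions, not LY Corporation's implementation; a minimal sketch:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are the Flava cloud assistant."},
    {"role": "user", "content": "How do I resize my Redis cluster?"},
    # Instead of packing situational rules into the system prompt, present
    # them as if a tool had returned them: the model then treats the
    # guideline as factual context rather than a competing instruction.
    {"role": "assistant", "tool_calls": [{
        "id": "call_guideline_1",
        "type": "function",
        "function": {"name": "get_response_guidelines", "arguments": "{}"},
    }]},
    {"role": "tool", "tool_call_id": "call_guideline_1",
     "content": "Guide users to the console UI first; suggest CLI "
                "commands only when the console cannot perform the task."},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```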

toss

Automating Service Vulnerability Analysis Using

Toss has developed a high-precision automated vulnerability analysis system by integrating Large Language Models (LLMs) with traditional security testing tools. By evolving their architecture from a simple prompt-based approach to a multi-agent system utilizing open-source models and static analysis, the team achieved over 95% accuracy in threat detection. This project demonstrates that moving beyond a technical proof-of-concept requires solving real-world constraints such as context window limits, output consistency, and long-term financial sustainability.

### Navigating Large Codebases with MCP

* Initial attempts to use RAG (Retrieval Augmented Generation) and repository compression tools failed because the LLM could not maintain complex code relationships within token limits.
* The team implemented a "SourceCode Browse MCP" (Model Context Protocol) which allows the LLM agent to dynamically query the codebase.
* By indexing the code, the agent can perform specific tool calls to find function definitions or variable usages only when necessary, effectively bypassing context window restrictions.

### Ensuring Consistency via SAST Integration

* Testing revealed that standalone LLMs produced inconsistent results, often missing known vulnerabilities or generating hallucinations across different runs.
* To solve this, the team integrated Semgrep, a Static Application Security Testing (SAST) tool, to identify all potential "Source-to-Sink" paths.
* Semgrep was chosen over CodeQL due to its lighter resource footprint and faster execution, acting as a structured roadmap that ensures the LLM analyzes every suspicious input path without omission.

### Optimizing Costs with Multi-Agent Architectures

* Analyzing every possible code path identified by SAST tools was prohibitively expensive due to high token consumption.
* The workflow was divided among three specialized agents: a Discovery Agent to filter out irrelevant paths, an Analysis Agent to perform deep logic checks, and a Verification Agent to confirm findings (see the sketch below).
* This "sieve" strategy ensured that the most resource-intensive analysis was only performed on high-probability vulnerabilities, significantly reducing operational costs.

### Transitioning to Open Models for Sustainability

* Scaling the system to hundreds of services and daily PRs made proprietary cloud models financially unviable.
* After benchmarking models like Llama 3.1 and GPT-OSS, the team selected **Qwen3:30B** for its 100% coverage rate and high true-positive accuracy in vulnerability detection.
* To bridge the performance gap between open-source and proprietary models, the team utilized advanced prompt engineering, one-shot learning, and enforced structured JSON outputs to improve reliability.

To build a production-ready AI security tool, teams should focus on the synergy between specialized open-source models and traditional static analysis tools. This hybrid approach provides a cost-effective and sustainable way to achieve enterprise-grade accuracy while maintaining full control over the analysis infrastructure.
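The SAST-guided sieve can be sketched as follows. Semgrep's CLI and JSON output are real; the agent prompts and the `ask_llm` helper are hypothetical stand-ins for the Discovery, Analysis, and Verification agents described above, not Toss's actual code.

```python
import json
import subprocess

def semgrep_findings(repo_path: str) -> list:
    """Run Semgrep and return its findings as a structured roadmap."""
    out = subprocess.run(
        ["semgrep", "--config", "auto", "--json", repo_path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)["results"]

def triage(findings: list, ask_llm) -> list:
    suspects = []
    for f in findings:
        snippet = f["extra"]["lines"]
        # Discovery agent: cheap yes/no filter over every candidate path,
        # so nothing Semgrep flagged is silently skipped.
        if "yes" in ask_llm("discovery",
                            f"Can user input reach this code?\n{snippet}").lower():
            suspects.append(f)
    confirmed = []
    for f in suspects:
        # Analysis agent: deep source-to-sink reasoning on survivors only.
        verdict = ask_llm("analysis",
                          "Trace the tainted data flow:\n" + json.dumps(f))
        # Verification agent: independent confirmation to cut hallucinations.
        if "vulnerable" in ask_llm("verification", verdict).lower():
            confirmed.append(f)
    return confirmed
```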

daangn

The Journey to Karrot Pay'

Daangn Pay has evolved its Fraud Detection System (FDS) from a traditional rule-based architecture to a sophisticated AI-powered framework to better protect user assets and combat evolving financial scams. By implementing a modular rule engine and integrating Large Language Models (LLMs), the platform has significantly reduced manual review times and improved its response to emerging fraud trends. This transition allows for consistent, context-aware risk assessment while maintaining compliance with strict financial regulations.

### Modular Rule Engine Architecture

* The system is built on a "Lego-like" structure consisting of three components: Conditions (basic units like account age or transfer frequency), Rules (logical combinations of conditions), and Policies (groups of rules with specific sanction levels); this composition is sketched in the code below.
* This modularity allows non-developers to adjust thresholds in real time (such as changing a "30-day membership" requirement to "70 days") to respond to sudden shifts in fraud patterns.
* Data flows through two distinct paths: a Synchronous API for immediate blocking decisions (e.g., during a live transfer) and an Asynchronous Stream for high-volume, real-time monitoring where slight latency is acceptable.

### Risk Evaluation and Post-Processing

* Events undergo a structured pipeline beginning with ingestion, followed by multi-layered evaluation through the rule engine to determine the final risk score.
* The post-processing phase incorporates LLM analysis to evaluate behavioral context, which is then used to trigger alerts for human operators or apply automated user sanctions.
* Implementation of this engine led to a measurable decrease in information requests from financial and investigative authorities, indicating a higher rate of internal prevention.

### LLM Integration for Contextual Analysis

* To solve the inconsistency and time lag of manual reviews, which previously took between 5 and 20 minutes per case, Daangn Pay integrated Claude 3.5 Sonnet via AWS Bedrock.
* The system overcomes strict financial "network isolation" regulations by utilizing an "Innovative Financial Service" designation, allowing the use of cloud-based generative AI within a regulated environment.
* The technical implementation uses a specialized data collector that pulls fraud history from BigQuery into a Redis cache to build structured, multi-step prompts for the LLM.
* The AI provides evaluations in a structured JSON format, assessing whether a transaction is fraudulent based on specific criteria and providing the reasoning behind the decision.

The combination of a flexible, rule-based foundation and context-aware LLM analysis demonstrates how fintech companies can scale security operations. For organizations facing high-volume fraud, the modular approach ensures immediate technical agility, while AI integration provides the nuanced judgment necessary to handle complex social engineering tactics.
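A minimal sketch of the Conditions/Rules/Policies composition described above, with illustrative field names and thresholds rather than Daangn Pay's actual schema:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Condition = Callable[[dict], bool]

def account_age_below(days: int) -> Condition:
    # Condition: the account is younger than `days`.
    return lambda event: event["account_age_days"] < days

def hourly_transfers_above(n: int) -> Condition:
    # Condition: more than `n` transfers in the last hour.
    return lambda event: event["transfers_last_hour"] > n

@dataclass
class Rule:
    name: str
    conditions: List[Condition]  # all must hold (logical AND) to fire

    def fires(self, event: dict) -> bool:
        return all(cond(event) for cond in self.conditions)

@dataclass
class Policy:
    name: str
    rules: List[Rule]
    sanction: str  # e.g. "block", "review", "monitor"

    def evaluate(self, event: dict) -> Tuple[Optional[str], List[str]]:
        hits = [r.name for r in self.rules if r.fires(event)]
        return (self.sanction if hits else None, hits)

# Retuning a threshold (e.g. 30 -> 70 days) is a data change, not a deploy.
rule = Rule("young-account-high-velocity",
            [account_age_below(70), hourly_transfers_above(5)])
policy = Policy("transfer-abuse", [rule], sanction="review")
print(policy.evaluate({"account_age_days": 10, "transfers_last_hour": 9}))
# -> ('review', ['young-account-high-velocity'])
```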

daangn

Karrot's Gen

Daangn has scaled its Generative AI capabilities from a few initial experiments to hundreds of diverse use cases by building a robust, centralized internal infrastructure. By abstracting model complexity and empowering non-technical stakeholders, the company has optimized API management, cost tracking, and rapid product iteration. The resulting platform ecosystem allows the organization to focus on delivering product value while minimizing the operational overhead of managing fragmented AI services.

### Centralized API Management via LLM Router

Initially, Daangn faced challenges with fragmented API keys, inconsistent rate limits across teams, and the inability to track total costs across multiple providers like OpenAI, Anthropic, and Google. The LLM Router was developed as an "AI Gateway" to consolidate these resources into a single point of access.

* **Unified Authentication:** Service teams no longer manage individual API keys; they use a unique Service ID to access models through the router.
* **Standardized Interface:** The router uses the OpenAI SDK as a standard interface, allowing developers to switch between models (e.g., from Claude to GPT) by simply changing the model name in the code without rewriting implementation logic (see the sketch below).
* **Observability and Cost Control:** Every request is tracked by service ID, enabling the infrastructure team to monitor usage limits and integrate costs directly into the company’s internal billing platform.

### Empowering Non-Engineers with Prompt Studio

To remove the bottleneck of needing an engineer for every prompt adjustment, Daangn built Prompt Studio, a web-based platform for prompt engineering and testing. This tool enables PMs and other non-developers to iterate on AI features independently.

* **No-Code Experimentation:** Users can write prompts, select models (including internally served vLLM models), and compare outputs side-by-side in a browser-based UI.
* **Batch Evaluation:** The platform includes an Evaluation feature that allows users to upload thousands of test cases to quantitatively measure how prompt changes impact output quality across different scenarios.
* **Direct Deployment:** Once a prompt is finalized, it can be deployed via API with a single click. Engineers only need to integrate the Prompt Studio API once, after which non-engineers can update the prompt or model version without further code changes.

### Ensuring Service Reliability and Stability

Because third-party AI APIs can be unstable or subject to regional outages, the platform incorporates several safety mechanisms to ensure that user-facing features remain functional even during provider downtime.

* **Automated Retries:** The system automatically identifies retry-able errors and re-executes requests to mitigate temporary API failures.
* **Region Fallback:** To bypass localized outages or rate limits, the platform can automatically route requests to different geographic regions or alternative providers to maintain service continuity.

### Recommendation

For organizations scaling AI adoption, the Daangn model suggests that investing early in a centralized gateway and a no-code prompt management environment is essential. This approach not only secures API management and controls costs but also democratizes AI development, allowing product teams to experiment at a pace that is impossible when tied to traditional software release cycles.
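Because the router standardizes on the OpenAI SDK, the model-swap behavior can be sketched directly. The gateway URL, Service ID header, and model names below are placeholders, not Daangn's real endpoints:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-router.internal.example.com/v1",  # gateway, not a provider
    api_key="unused",  # authentication happens via the Service ID header
    default_headers={"X-Service-Id": "my-team-service"},
)

def summarize(text: str, model: str) -> str:
    resp = client.chat.completions.create(
        model=model,  # switching providers is changing this one string
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return resp.choices[0].message.content

# Same code path, different providers behind the router.
print(summarize("...", model="claude-sonnet"))
print(summarize("...", model="gpt-4o"))
```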

google

Google Research 2025: Bolder breakthroughs, bigger impact

Google Research in 2025 has shifted toward an accelerated "Magic Cycle" that rapidly translates foundational breakthroughs into real-world applications across science, society, and consumer products. By prioritizing model efficiency, factuality, and agentic capabilities, the organization is moving beyond static text generation toward interactive, multi-modal systems that solve complex global challenges. This evolution is underpinned by a commitment to responsible AI development, ensuring that new technologies like quantum computing and generative UI are both safe and culturally inclusive.

## Enhancing Model Efficiency and Factuality

* Google introduced new efficiency-focused techniques like block verification (an evolution of speculative decoding) and the LAVA scheduling algorithm, which optimizes resource allocation in large cloud data centers.
* The Gemini 3 model achieved state-of-the-art results on factuality benchmarks, including SimpleQA Verified and the newly released FACTS benchmark suite, by emphasizing grounded world knowledge.
* Research into Retrieval Augmented Generation (RAG) led to the development of the LLM Re-Ranker in Vertex AI, which helps models determine if they possess sufficient context to provide accurate answers.
* The Gemma open model expanded to support over 140 languages, supported by the TUNA taxonomy and the Amplify initiative to improve socio-cultural intelligence and data representation.

## Interactive Experiences through Generative UI

* A novel implementation of generative UI allows Gemini 3 to dynamically create visual interfaces, web pages, and tools in response to user prompts rather than providing static text.
* This technology is powered by specialized models like "Gemini 3-interactive," which are trained to output structured code and design elements.
* These capabilities have been integrated into AI Mode within Google Search, allowing for more immersive and customizable user journeys.

## Advanced Architectures and Agentic AI

* Google is exploring hybrid model architectures, such as Jamba-style models that combine State Space Models (SSMs) with traditional attention mechanisms to handle long contexts more efficiently.
* The development of agentic AI focuses on models that can reason, plan, and use tools, exemplified by Project Astra, a prototype for a universal AI agent.
* Specialized models like Gemini 3-code have been optimized to act as autonomous collaborators for software developers, assisting in complex coding tasks and system design.

## AI for Science and Planetary Health

* In biology, research teams utilized AI to map human heart and brain structures and employed RoseTTAFold-Diffusion to design new proteins for therapeutic use.
* The NeuralGCM model has revolutionized Earth sciences by combining traditional physics with machine learning for faster, more accurate weather and climate forecasting.
* Environmental initiatives include the FireSat satellite constellation for global wildfire detection and the expansion of AI-driven flood forecasting and contrail mitigation.

## Quantum Computing and Responsible AI

* Google achieved significant milestones in quantum error correction, developing low-overhead codes that bring the industry closer to a reliable, large-scale quantum computer.
* Security and safety remain central, with the expansion of SynthID, a watermarking tool for AI-generated text, audio, and video, to help users identify synthetic content.
* The team continues to refine the Secure AI Framework (SAIF) to defend against emerging threats while promoting the safe deployment of generative media models like Veo and Imagen.

To maximize the impact of these advancements, organizations should focus on integrating agentic workflows and RAG-based architectures to ensure their AI implementations are both factual and capable of performing multi-step tasks. Developers can leverage the Gemma open models to build culturally aware applications that scale across diverse global markets.

line

Safety is a Given, Cost Reduction

AI developers often rely on system prompts to enforce safety rules, but this integrated approach frequently leads to "over-refusal" and unpredictable shifts in model performance. To ensure both security and operational efficiency, it is increasingly necessary to decouple safety mechanisms into separate guardrail systems that operate independently of the primary model's logic.

## Negative Impact on Model Utility

* Integrating safety instructions directly into system prompts often leads to a high False Positive Rate (FPR), where the model rejects harmless requests alongside harmful ones.
* Technical analysis using Principal Component Analysis (PCA) reveals that guardrail prompts shift the model's embedding results in a consistent direction toward refusal, regardless of the input's actual intent.
* Studies show that aggressive safety prompting can cause models to refuse benign technical queries (such as "how to kill a Python process") because the model adopts an overly conservative decision boundary.

## Positional Bias and Context Neglect

* Research on the "Lost in the Middle" phenomenon indicates that LLMs are most sensitive to information at the beginning and end of a prompt, while accuracy drops significantly for information placed in the center.
* The "Constraint Difficulty Distribution Index" (CDDI) demonstrates that the order of instructions matters; models generally follow instructions better when difficult constraints are placed at the beginning of the prompt.
* In complex system prompts where safety rules are buried in the middle, the model may fail to prioritize these guardrails, leading to inconsistent safety enforcement depending on the prompt's structure.

## The Butterfly Effect of Prompt Alterations

* Small, seemingly insignificant changes to a system prompt (such as adding a single whitespace, a "Thank you" note, or changing the output format to JSON) can alter more than 10% of a model's predictions.
* Modifying safety-related lines within a unified system prompt can cause "catastrophic performance collapse," where the model's internal reasoning path is diverted, affecting unrelated tasks.
* Because LLMs treat every part of the prompt as a signal that moves their decision boundaries, managing safety and task logic in a single string makes the system brittle and difficult to iterate upon.

To build robust and high-performing AI applications, developers should move away from bloated system prompts and instead implement external guardrails (a minimal sketch follows). This modular approach allows for precise security filtering without compromising the model's creative or logical capabilities.
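A minimal sketch of the decoupled approach: the guardrail runs as a separate call with its own tiny prompt, so safety policy and task logic never share a string. The models and prompts below are illustrative assumptions, not the article's implementation.

```python
from openai import OpenAI

client = OpenAI()

def guardrail_check(text: str) -> bool:
    """Return True if the text is safe. A cheap, separate call whose
    prompt never mixes with the main task instructions."""
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": "Answer only SAFE or UNSAFE."},
                  {"role": "user", "content": text}],
    ).choices[0].message.content
    return verdict.strip().upper().startswith("SAFE")

def answer(user_input: str) -> str:
    if not guardrail_check(user_input):
        return "Sorry, I can't help with that."
    # The task prompt carries no safety clauses, so editing the safety
    # policy never perturbs task behavior, and vice versa.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "You are a coding assistant."},
                  {"role": "user", "content": user_input}],
    )
    return resp.choices[0].message.content

print(answer("How do I kill a Python process?"))  # benign query passes
```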

google

Gemini provides automated feedback for theoretical computer scientists at STOC 2026

Google Research launched an experimental program for the STOC 2026 conference using a specialized Gemini model to provide automated, rigorous feedback on theoretical computer science submissions. By identifying critical logical errors and proof gaps within a 24-hour window, the tool demonstrated that advanced AI can serve as a powerful pre-vetting collaborator for high-level mathematical research. The overwhelmingly positive reception from authors indicates that AI can effectively augment the human peer-review process by improving paper quality before formal submission.

## Advanced Reasoning via Inference Scaling

- The tool utilized an advanced version of Gemini 2.5 Deep Think specifically optimized for mathematical rigor.
- It employed inference scaling methods, allowing the model to explore and combine multiple possible solutions and reasoning traces simultaneously.
- This non-linear approach to problem-solving helps the model focus on the most salient technical issues while significantly reducing the likelihood of hallucinations.

## Structured Technical Feedback

- Feedback was delivered in a structured format that included a high-level summary of the paper's core contributions.
- The model provided a detailed analysis of potential mistakes, specifically targeting errors within lemmas, theorems, and logical proofs.
- Authors also received a categorized list of minor corrections, such as inconsistent variable naming and typographical errors.

## Identified Technical Issues and Impact

- The pilot saw high engagement, with over 80% of STOC 2026 submitters opting in for the AI-generated review.
- The tool successfully identified "critical bugs" and calculation errors that had previously evaded human authors for months.
- Survey results showed that 97% of participants found the feedback helpful, and 81% reported that the tool improved the overall clarity and readability of their work.

## Expert Verification and Hallucinations

- Because the users were domain experts, they were able to act as a filter, distinguishing between deep technical insights and occasional model hallucinations.
- While the model sometimes struggled to parse complex notation or interpret figures, authors valued the "neutral tone" and the speed of the two-day turnaround.
- The feedback was used as a starting point for human verification, allowing researchers to refine their arguments rather than blindly following the model's output.

## Future Outlook and Educational Potential

- Beyond professional research, 75% of surveyed authors see significant educational value in using the tool to train students in mathematical rigor.
- The experiment's success has led to 88% of participants expressing interest in having continuous access to such a tool throughout their entire research and drafting process.

The success of the STOC 2026 pilot suggests that researchers should consider integrating specialized LLMs early in the drafting phase to catch "embarrassing" or logic-breaking errors. While the human expert remains the final arbiter of truth, these tools provide a necessary layer of automated verification that can accelerate the pace of scientific discovery.

google

A differentially private framework for gaining insights into AI chatbot use

Google Research has introduced Urania, a novel framework designed to extract high-level usage insights from AI chatbot conversations while maintaining rigorous differential privacy (DP) guarantees. Unlike previous heuristic methods that rely on simple redaction or LLM-based PII stripping, this pipeline ensures that no individual user's data can be reconstructed from the resulting summaries. By combining DP clustering and keyword extraction with LLM-based summarization, the system provides a formal, auditable approach to understanding platform trends without compromising sensitive information.

## Limitations of Heuristic Privacy

* Existing frameworks often rely on large language models to manually strip personally identifiable information (PII) from text before analysis.
* These heuristic protections are difficult to formalize or audit, and their effectiveness may diminish as models evolve or face sophisticated prompt injection attacks.
* The Urania framework addresses these weaknesses by using mathematical privacy budgets (the epsilon parameter) to measure and limit the influence of any single user's data on the final output.

## The Differentially Private Pipeline

* **DP Clustering**: The framework first converts conversation data into numerical embeddings. These are grouped using a DP clustering algorithm, ensuring that cluster centers reflect broad trends rather than specific individual inputs.
* **DP Keyword Extraction**: The system identifies keywords for each cluster and generates a histogram of their frequency. By adding mathematical noise to these counts, the framework masks individual contributions and ensures that only keywords common to many users are retained (see the sketch below).
* **Keyword Generation Methods**: The researchers explored three methods for extraction: LLM-guided selection of relevant terms, a differentially private version of TF-IDF, and an LLM-guided approach that selects terms from a pre-defined list of public keywords.
* **LLM Summarization**: In the final stage, an LLM generates a high-level summary of the cluster using only the noisy, anonymized keywords. Because the LLM never sees the raw conversation text, the "post-processing" property of DP guarantees that the final summary remains private.

## Privacy and Utility Trade-offs

* The framework was tested against a non-private baseline (Simple-CLIO) to evaluate how privacy constraints affect the quality of the insights generated.
* Stronger privacy settings (lower epsilon values) inherently result in a utility trade-off, as the added noise can obscure some niche usage patterns.
* Despite these trade-offs, the framework provides a robust defense against data leakage, as the summarization model is structurally prevented from seeing sensitive original text, making it resilient to prompt injection.

This framework offers a scalable way for platform providers to analyze chatbot usage patterns and enforce safety policies while providing mathematical certainty regarding user privacy. For organizations handling sensitive conversation data, moving from heuristic redaction to formal DP pipelines like Urania provides a more robust and auditable path for service improvement.
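The DP keyword-extraction step can be sketched with a Laplace mechanism over bounded per-user contributions. Epsilon, the contribution cap, and the threshold below are illustrative parameters, not Urania's published settings:

```python
import numpy as np
from collections import Counter

def dp_keyword_histogram(users_keywords, epsilon=1.0,
                         max_kw_per_user=5, threshold=20.0):
    """Noisy keyword histogram: bound each user's contribution, add
    Laplace noise scaled to the sensitivity, keep only frequent keywords."""
    counts = Counter()
    for kws in users_keywords:
        # Deduplicate and cap per-user keywords to bound sensitivity.
        for kw in list(dict.fromkeys(kws))[:max_kw_per_user]:
            counts[kw] += 1
    # Each user changes at most max_kw_per_user counts by 1, so the L1
    # sensitivity is max_kw_per_user; Laplace scale = sensitivity / epsilon.
    scale = max_kw_per_user / epsilon
    noisy = {kw: c + np.random.laplace(0.0, scale)
             for kw, c in counts.items()}
    # Thresholding keeps only keywords common to many users.
    return {kw: n for kw, n in noisy.items() if n >= threshold}

# The summarizing LLM sees only this noisy output; by DP post-processing,
# the final summary inherits the same privacy guarantee.
histogram = dp_keyword_histogram([["billing", "refund"],
                                  ["refund", "login"]] * 40)
print(sorted(histogram, key=histogram.get, reverse=True))
```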

naver

I'm an LL

Processing complex PDF documents remains a significant bottleneck for Large Language Models (LLMs) due to the intricate layouts, nested tables, and visual charts that standard text extractors often fail to capture. To address this, NAVER developed PaLADIN, an LLM-friendly PDF parser designed to transform visual document elements into structured data that models can accurately interpret. By combining specialized vision models with advanced OCR, the system enables high-fidelity document understanding for demanding tasks like analyzing financial reports.

### Challenges in Document Intelligence

* Standard PDF parsing often loses the semantic structure of the document, such as the relationship between headers and body text.
* Tables and charts pose the greatest difficulty, as numerical values and trends must be extracted without losing the spatial context that defines their meaning.
* A "one-size-fits-all" approach to text extraction results in "hallucinations" when LLMs attempt to reconstruct data from fragmented strings.

### The PaLADIN Architecture and Model Integration

* **Element Detection:** The system utilizes `Doclayout-Yolo` to identify and categorize document components like text blocks, titles, tables, and figures.
* **Table Extraction:** Visual table structures are processed through `nemoretriever-table-structure-v1`, ensuring that cell boundaries and headers are preserved.
* **Chart Interpretation:** To convert visual charts into descriptive text or data, the parser employs `google/gemma3-27b-it`, allowing the LLM to "read" visual trends.
* **Text Recognition:** For high-accuracy character recognition, particularly in multi-lingual contexts, the pipeline integrates NAVER’s `Papago OCR`.
* **Infrastructure:** The architecture leverages `nv-ingest` for optimized throughput and speed, making it suitable for large-scale document processing (the overall routing shape is sketched below).

### Evaluation and Real-world Application

* **Performance Metrics:** NAVER established a dedicated parsing evaluation set to measure accuracy across diverse document types, focusing on speed and structural integrity.
* **AIB Securities Reports:** The parser is currently applied to summarize complex stock market reports, where precision in numerical data is critical.
* **LLM-as-a-Judge:** To ensure summary quality, the system uses an automated evaluation framework where a high-performing LLM judges the accuracy of the generated summaries against the parsed source data.

For organizations building RAG (Retrieval-Augmented Generation) systems, the transition from basic text extraction to a layout-aware parsing pipeline like PaLADIN is crucial. Future improvements focusing on table cell coordinate precision and more granular chart analysis will further reduce the error rates in automated document processing.
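The element-routing logic implied by the architecture can be sketched as below. Every stage is a stub whose comment names the component PaLADIN reportedly uses; the function signatures are assumptions for illustration, not NAVER's API.

```python
from dataclasses import dataclass

@dataclass
class Element:
    kind: str     # "text" | "table" | "figure"
    crop: object  # image region of the page

# Each stub stands in for the model named in its comment.
def detect_layout(page_image):   # Doclayout-Yolo in PaLADIN
    return [Element("text", page_image)]

def extract_table(el):           # nemoretriever-table-structure-v1
    return {"headers": [], "rows": []}

def describe_chart(el):          # google/gemma3-27b-it
    return "chart described as text"

def run_ocr(el):                 # Papago OCR
    return "recognized text"

def parse_page(page_image):
    """Route each detected element to its specialist model and emit
    structured, LLM-friendly records (batched via nv-ingest in production)."""
    parsed = []
    for el in detect_layout(page_image):
        if el.kind == "table":
            parsed.append({"type": "table", "data": extract_table(el)})
        elif el.kind == "figure":
            parsed.append({"type": "chart", "text": describe_chart(el)})
        else:
            parsed.append({"type": "text", "text": run_ocr(el)})
    return parsed

print(parse_page(page_image=None))
```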

aws

New serverless customization in Amazon SageMaker AI accelerates model fine-tuning

Amazon SageMaker AI has introduced a new serverless customization capability designed to accelerate the fine-tuning of popular models like Llama, DeepSeek, and Amazon Nova. By automating resource provisioning and providing an intuitive interface for advanced reinforcement learning techniques, this feature reduces the model customization lifecycle from months to days. This end-to-end workflow allows developers to focus on model performance rather than infrastructure management, from initial training through to final deployment.

**Automated Infrastructure and Model Support**

* The service provides a serverless environment where SageMaker AI automatically selects and provisions compute resources based on the specific model architecture and dataset size.
* Supported models include a broad range of high-performance options such as Amazon Nova, DeepSeek, GPT-OSS, Meta Llama, and Qwen.
* The feature is accessible directly through the Amazon SageMaker Studio interface, allowing users to manage their entire model catalog in one location.

**Advanced Customization and Reinforcement Learning**

* Users can choose from several fine-tuning techniques, including traditional Supervised Fine-Tuning (SFT) and more advanced methods.
* The platform supports modern optimization techniques such as Direct Preference Optimization (DPO), Reinforcement Learning from Verifiable Rewards (RLVR), and Reinforcement Learning from AI Feedback (RLAIF); the preference-pair data shape DPO consumes is illustrated below.
* To simplify the process, SageMaker AI provides recommended defaults for hyperparameters like batch size, learning rate, and epochs based on the selected tuning technique.

**Experiment Tracking and Security**

* The workflow introduces a serverless MLflow application, enabling seamless experiment tracking and performance monitoring without additional setup.
* Advanced configuration options allow for fine-grained control over network encryption and storage volume encryption to ensure data security.
* The "Continue customization" feature allows for iterative tuning, where users can adjust hyperparameters or apply different techniques to an existing customized model.

**Evaluation and Deployment Flexibility**

* Built-in evaluation tools allow developers to compare the performance of their customized models against the original base models to verify improvements.
* Once a model is finalized, it can be deployed with a few clicks to either Amazon SageMaker or Amazon Bedrock.
* A centralized "My Models" dashboard tracks all custom iterations, providing detailed logs and status updates for every training and evaluation job.

This serverless approach is highly recommended for teams that need to adapt large language models to specific domains quickly without the operational overhead of managing GPU clusters. By utilizing the integrated evaluation and multi-platform deployment options, organizations can transition from experimentation to production-ready AI more efficiently.
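As an illustration of what DPO-style tuning consumes, the sketch below shows the conventional preference-pair shape (prompt, chosen, rejected). The exact dataset schema SageMaker expects may differ; consult the service documentation before uploading training data.

```python
import json

# Conventional DPO preference pairs: the tuner learns to prefer "chosen"
# over "rejected" for the same prompt. Content here is illustrative.
pairs = [
    {"prompt": "Explain what a VPC is in one sentence.",
     "chosen": "A VPC is an isolated virtual network you define in the "
               "cloud, controlling IP ranges, subnets, and routing for "
               "your resources.",
     "rejected": "It's a cloud thing for networks."},
]

with open("dpo_train.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```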

aws

Amazon Bedrock adds 18 fully managed open weight models, including the new Mistral Large 3 and Ministral 3 models

Amazon Bedrock has significantly expanded its generative AI offerings by adding 18 new fully managed open-weight models from providers including Google, Mistral AI, NVIDIA, and OpenAI. This update brings the platform's total to nearly 100 serverless models, allowing developers to leverage a broad spectrum of specialized capabilities through a single, unified API. By providing access to these high-performing models without requiring infrastructure changes, AWS enables organizations to rapidly evaluate and deploy the most cost-effective and capable tools for their specific workloads.

### Specialized Mistral AI Releases

The launch features four new models from Mistral AI, headlined by Mistral Large 3 and the edge-optimized Ministral series.

* **Mistral Large 3:** Optimized for long-context tasks, multimodal reasoning, and instruction reliability, making it suitable for complex coding assistance and multilingual enterprise knowledge work.
* **Ministral 3 (3B, 8B, and 14B):** These models are specifically designed for edge-optimized deployments on a single GPU.
* **Use Cases:** While the 3B model excels at real-time translation and data extraction on low-resource devices, the 14B version is built for advanced local agentic workflows where privacy and hardware constraints are primary concerns.

### Broadened Model Provider Portfolio

Beyond the Mistral updates, AWS has integrated several other open-weight options to address diverse industry requirements ranging from mobile applications to global scaling.

* **Google Gemma 3 4B:** An efficient multimodal model designed to run locally on laptops, supporting on-device AI and multilingual processing.
* **Global Provider Support:** The expansion includes models from MiniMax AI, Moonshot AI, NVIDIA, OpenAI, and Qwen, ensuring a competitive variety of reasoning and processing capabilities.
* **Multimodal Capabilities:** Many of the new additions support vision-based tasks, such as image captioning and document understanding, alongside traditional text-based functions.

### Streamlined AI Development and Integration

The primary technical advantage of this update is the ability to swap between diverse models using the Amazon Bedrock unified API.

* **Infrastructure Consistency:** Developers can switch to newer, more efficient models without rewriting application code or managing underlying servers (see the sketch below).
* **Evaluation and Deployment:** The serverless architecture allows for immediate testing of different model weights (such as moving from 3B to 14B) to find the optimal balance between performance and latency.
* **Enterprise Tooling:** These models integrate with existing Bedrock features, allowing for simplified agentic workflows and tool-use implementations.

To take full advantage of these updates, developers should utilize the Bedrock console to experiment with the new Mistral and Gemma models for edge and multimodal use cases. The unified API structure makes it practical to run A/B tests between these open-weight models and established industry favorites to optimize for specific cost and performance targets.
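Model swapping through the unified API can be sketched with the Bedrock Converse API, where changing providers is a one-string change. The model IDs below are illustrative; the exact identifiers appear in the Bedrock console.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, question: str) -> str:
    """One code path for every provider behind the Converse API."""
    resp = bedrock.converse(
        modelId=model_id,  # swapping providers is changing this string
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 512},
    )
    return resp["output"]["message"]["content"][0]["text"]

# A/B the same prompt across open-weight models without touching app code.
for model in ["mistral.mistral-large-3-v1:0",   # illustrative ID
              "google.gemma-3-4b-v1:0"]:        # illustrative ID
    print(model, "->", ask(model, "Summarize our Q3 incident report."))
```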

naver

Recreating the User's

The development of NSona, an LLM-based multi-agent persona platform, addresses the persistent gap between user research and service implementation by transforming static data into real-time collaborative resources. By recreating user voices through a multi-party dialogue system, the project demonstrates how AI can serve as an active participant in the daily design and development process. Ultimately, the initiative highlights a fundamental shift in cross-functional collaboration, where traditional role boundaries dissolve in favor of a shared starting point centered on AI-driven user empathy.

## Bridging UX Research and Daily Collaboration

* The project was born from the realization that traditional UX research often remains isolated from the actual development cycle, leading to a loss of insight during implementation.
* NSona transforms static user research data into dynamic "persona bots" that can interact with project members in real time.
* The platform aims to turn the user voice into a "live" resource, allowing designers and developers to consult the persona during the decision-making process.

## Agent-Centric Engineering and Multi-Party UX

* The system architecture is built on an agent-centric structure designed to handle the complexities of specific user behaviors and motivations.
* It utilizes a multi-party dialogue framework, enabling a collaborative environment where multiple AI agents and human stakeholders can converse simultaneously.
* Technical implementation focused on bridging the gap between qualitative UX requirements and LLM orchestration, ensuring the persona's responses remained grounded in actual research data (a minimal sketch follows this summary).

## Service-Specific Evaluation and Quality Metrics

* The team moved beyond generic LLM benchmarks to establish a "service-specific" evaluation process tailored to the project's unique UX goals.
* Model quality was measured by how vividly and accurately it recreated the intended persona, focusing on the degree of "immersion" it triggered in human users.
* Insights from these evaluations helped refine the prompt design and agent logic to ensure the AI's output provided genuine value to the product development lifecycle.

## Redefining Cross-Functional Collaboration

* The AI development process reshaped traditional roles and responsibilities (R&R): designers became prompt engineers, while researchers translated qualitative logic into agentic structures.
* Front-end developers evolved their roles to act as critical reviewers of the AI, treating the model as a subject of critique rather than a static asset.
* The workflow shifted from a linear "relay" model to a concentric one, where all team members influence the product's core from the same starting point.

To successfully integrate AI into the product lifecycle, organizations should move beyond using LLMs as simple tools and instead view them as a medium for interdisciplinary collaboration. By building multi-agent systems that reflect real user data, teams can ensure that the "user's voice" is not just a research summary, but a tangible participant in the development process.
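A research-grounded persona bot can be sketched as a system prompt that confines the agent to interview findings. The findings, prompt wording, and model below are illustrative, not NSona's actual design:

```python
from openai import OpenAI

client = OpenAI()

# Findings distilled from (hypothetical) user research for persona "P3".
RESEARCH_FINDINGS = [
    "P3 abandons checkout when more than two confirmation steps appear.",
    "P3 trusts peer reviews far more than official product descriptions.",
]

def persona_reply(question: str) -> str:
    """Answer as the persona, grounded strictly in the research data."""
    system = (
        "You are 'P3', a persona reconstructed from user research. "
        "Answer in the first person, ground every claim in the findings "
        "below, and say 'the research doesn't cover that' otherwise.\n- "
        + "\n- ".join(RESEARCH_FINDINGS)
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(persona_reply("Would you use one-tap checkout?"))
```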

naver

FE News - December 2025

The December 2025 FE News highlights a significant shift in front-end development where the dominance of React is being cemented by LLM training cycles, even as the browser platform begins to absorb core framework functionalities. It explores the evolution of WebAssembly beyond its name and Vercel’s vision for managing distributed systems through language-level abstractions. Ultimately, the industry is moving toward a convergence of native web standards and AI-driven development paradigms that prioritize collective intelligence and simplified architectures.

### Clarifying the Identity of WebAssembly

* Wasm is frequently misunderstood as a web-only assembly language, but it functions more like a platform-agnostic bytecode similar to JVM or .NET.
* The name "WebAssembly" was originally a strategic choice for project funding rather than an accurate technical description of its capabilities or intended environment.

### The LLM Feedback Loop and React’s Dominance

* The "dead framework theory" suggests that because LLM tools like Replit and Bolt hardcode React into system prompts, the framework has reached a state of perpetual self-reinforcement.
* With over 13 million React sites deployed in the last year, new frameworks face a 12-18 month lag to be included in LLM training data, making it nearly impossible for competitors to disrupt React's current platform status.

### Vercel and the Evolution of Programming Abstractions

* Vercel is integrating complex distributed system management directly into the development experience via directives like `Server Actions`, `use cache`, and `use workflow`.
* These features are built on serializable closures, algebraic effects, and incremental computation, moving complexity from external libraries into the native language structure.

### Native Browser APIs vs. Third-Party Frameworks

* Modern web standards, including Shadow DOM, ES Modules, and the Navigation and View Transitions APIs, are now capable of handling routing and state management natively.
* This transition allows for high-performance application development with reduced bundle sizes, as the browser platform takes over responsibilities previously exclusive to heavy frameworks.

### LLM Council: Collective AI Decision Making

* Andrej Karpathy’s LLM Council is a local web application that utilizes a three-stage process (independent suggestion, peer review, and final synthesis) to overcome the limitations of single AI models; the stages are sketched below.
* The system utilizes the OpenRouter API to combine the strengths of various models, such as GPT-5.1 and Claude Sonnet 4.5, using a stack built on Python (FastAPI) and React with Vite.

Developers should focus on mastering native browser APIs as they become more capable while recognizing that React’s ecosystem remains the most robust choice for AI-integrated workflows. Additionally, exploring multi-model consensus systems like the LLM Council can provide more reliable results for complex technical decision-making than relying on a single AI provider.
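The council's three stages map naturally onto OpenRouter's OpenAI-compatible API. The model slugs and chairman choice below are illustrative and should be verified against OpenRouter's catalog; this is a sketch of the pattern, not Karpathy's actual code.

```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1",
                api_key="YOUR_OPENROUTER_KEY")

COUNCIL = ["openai/gpt-5.1", "anthropic/claude-sonnet-4.5"]

def chat(model: str, prompt: str) -> str:
    r = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def council(question: str, chairman: str = "openai/gpt-5.1") -> str:
    # Stage 1: each model answers independently.
    answers = [chat(m, question) for m in COUNCIL]
    bundle = "\n\n".join(f"Answer {i + 1}:\n{a}"
                         for i, a in enumerate(answers))
    # Stage 2: each model reviews the anonymized answers of its peers.
    reviews = [chat(m, f"Rank and critique these answers:\n{bundle}")
               for m in COUNCIL]
    # Stage 3: a chairman model synthesizes answers and reviews.
    return chat(chairman,
                f"Question: {question}\n\nAnswers:\n{bundle}\n\n"
                "Reviews:\n" + "\n\n".join(reviews))

print(council("Should we adopt the View Transitions API now?"))
```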