daangn / ai

2 posts

Daangn Pay's Journey to an AI-Powered FDS: From Building a Rule Engine to Applying LLMs

Daangn Pay has evolved its Fraud Detection System (FDS) from a traditional rule-based architecture into an AI-powered framework to better protect user assets and combat evolving financial scams. By building a modular rule engine and integrating Large Language Models (LLMs), the platform has significantly reduced manual review times and improved its response to emerging fraud trends. This transition enables consistent, context-aware risk assessment while maintaining compliance with strict financial regulations.

### Modular Rule Engine Architecture

* The system is built on a "Lego-like" structure with three components: Conditions (basic units such as account age or transfer frequency), Rules (logical combinations of conditions), and Policies (groups of rules with specific sanction levels). A rough data-model sketch appears at the end of this summary.
* This modularity lets non-developers adjust thresholds in real time, for example changing a "30-day membership" requirement to "70 days", to respond to sudden shifts in fraud patterns.
* Data flows through two distinct paths: a synchronous API for immediate blocking decisions (e.g., during a live transfer) and an asynchronous stream for high-volume, real-time monitoring where slight latency is acceptable; both paths are sketched below.

### Risk Evaluation and Post-Processing

* Events pass through a structured pipeline: ingestion, followed by multi-layered evaluation in the rule engine to determine the final risk score.
* The post-processing phase adds LLM analysis of behavioral context, which is used to trigger alerts for human operators or to apply automated user sanctions.
* After the engine launched, information requests from financial and investigative authorities measurably decreased, indicating a higher rate of internal prevention.

### LLM Integration for Contextual Analysis

* To fix the inconsistency and time lag of manual reviews, which previously took 5 to 20 minutes per case, Daangn Pay integrated Claude 3.5 Sonnet via AWS Bedrock.
* The system satisfies strict financial "network isolation" regulations through an "Innovative Financial Service" designation, which permits the use of cloud-based generative AI within a regulated environment.
* A specialized data collector pulls fraud history from BigQuery into a Redis cache to build structured, multi-step prompts for the LLM.
* The model returns its evaluation as structured JSON, judging whether a transaction is fraudulent against specific criteria and explaining the reasoning behind the decision (see the final sketch below).

The combination of a flexible rule-based foundation and context-aware LLM analysis shows how fintech companies can scale security operations. For organizations facing high-volume fraud, the modular approach provides immediate technical agility, while LLM integration supplies the nuanced judgment needed to handle complex social engineering tactics.
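
As a rough illustration of the Condition/Rule/Policy structure described above, here is a minimal sketch in Python. The class and field names are assumptions for illustration, not Daangn Pay's actual implementation; the point is that thresholds live in data, so operators can tune them without a deploy.

```python
from dataclasses import dataclass
from typing import Callable

# Condition: the smallest unit, a named predicate over an event with a
# runtime-tunable threshold (e.g. account age, transfer frequency).
@dataclass
class Condition:
    name: str
    predicate: Callable[[dict, float], bool]
    threshold: float

    def evaluate(self, event: dict) -> bool:
        return self.predicate(event, self.threshold)

# Rule: a logical combination of conditions (AND, for simplicity).
@dataclass
class Rule:
    name: str
    conditions: list[Condition]

    def evaluate(self, event: dict) -> bool:
        return all(c.evaluate(event) for c in self.conditions)

# Policy: a group of rules tied to a sanction level.
@dataclass
class Policy:
    name: str
    rules: list[Rule]
    sanction: str  # e.g. "alert", "limit", "block"

    def evaluate(self, event: dict) -> str | None:
        return self.sanction if any(r.evaluate(event) for r in self.rules) else None

# Tuning a threshold (30 -> 70 days) is a data change, not a code change.
young_account = Condition("account_age_days_below",
                          lambda e, t: e["account_age_days"] < t, threshold=30)
burst_transfers = Condition("transfers_last_hour_above",
                            lambda e, t: e["transfers_last_hour"] > t, threshold=5)
policy = Policy("new_account_burst",
                [Rule("young_and_busy", [young_account, burst_transfers])],
                sanction="block")

print(policy.evaluate({"account_age_days": 10, "transfers_last_hour": 8}))  # block
```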
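
Building on the Policy sketch above, the two delivery paths might look roughly like this. The function names and the alerting stub are hypothetical; the post confirms only the sync-blocking versus async-monitoring split.

```python
# Synchronous path: the transfer API waits for the verdict and can block inline.
def handle_transfer_request(event: dict) -> dict:
    verdict = policy.evaluate(event)  # Policy from the sketch above
    if verdict == "block":
        return {"status": "rejected", "reason": "fds_policy"}
    return {"status": "accepted"}

# Asynchronous path: a stream consumer handles high-volume events where slight
# latency is acceptable; it raises alerts rather than blocking the user.
def enqueue_alert(event: dict) -> None:
    print("ALERT:", event)  # stand-in for a real alerting queue

def consume_event_stream(events) -> None:
    for event in events:
        if policy.evaluate(event) is not None:
            enqueue_alert(event)
```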
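
Finally, a minimal sketch of the LLM step. The post confirms BigQuery-to-Redis history collection and Claude 3.5 Sonnet on AWS Bedrock; the cache key layout, prompt wording, and JSON schema below are purely illustrative assumptions.

```python
import json
import boto3
import redis

# Hypothetical cache layout: fraud history per user, pre-collected from BigQuery.
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def evaluate_transfer(user_id: str, transfer: dict) -> dict:
    history = cache.get(f"fraud_history:{user_id}") or "[]"
    prompt = (
        "You are a fraud analyst. Given the user's recent history and the "
        "current transfer, decide whether it looks fraudulent.\n"
        f"History: {history}\nTransfer: {json.dumps(transfer)}\n"
        'Respond with JSON only: {"fraudulent": bool, "reason": str}'
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    # The model's answer is itself JSON, per the prompt's output contract.
    return json.loads(payload["content"][0]["text"])
```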

Daangn's GenAI Platform

Daangn has scaled its generative AI capabilities from a few initial experiments to hundreds of diverse use cases by building robust, centralized internal infrastructure. By abstracting model complexity and empowering non-technical stakeholders, the company has streamlined API management, cost tracking, and rapid product iteration. The resulting platform ecosystem lets the organization focus on delivering product value while minimizing the operational overhead of managing fragmented AI services.

### Centralized API Management via LLM Router

Initially, Daangn faced fragmented API keys, inconsistent rate limits across teams, and no way to track total costs across providers such as OpenAI, Anthropic, and Google. The LLM Router was developed as an "AI Gateway" to consolidate these resources into a single point of access.

* **Unified Authentication:** Service teams no longer manage individual API keys; they use a unique Service ID to access models through the router.
* **Standardized Interface:** The router adopts the OpenAI SDK as its standard interface, so developers can switch between models (e.g., from Claude to GPT) by changing only the model name, without rewriting implementation logic (a client-side sketch appears at the end of this summary).
* **Observability and Cost Control:** Every request is tracked by Service ID, letting the infrastructure team monitor usage limits and feed costs directly into the company's internal billing platform.

### Empowering Non-Engineers with Prompt Studio

To remove the bottleneck of needing an engineer for every prompt adjustment, Daangn built Prompt Studio, a web-based platform for prompt engineering and testing. The tool enables PMs and other non-developers to iterate on AI features independently.

* **No-Code Experimentation:** Users can write prompts, select models (including internally served vLLM models), and compare outputs side by side in a browser-based UI.
* **Batch Evaluation:** An Evaluation feature lets users upload thousands of test cases to quantitatively measure how prompt changes affect output quality across scenarios.
* **Direct Deployment:** Once a prompt is finalized, it can be deployed via API with a single click. Engineers integrate the Prompt Studio API once; afterwards, non-engineers can update the prompt or model version without further code changes (sketched below).

### Ensuring Service Reliability and Stability

Because third-party AI APIs can be unstable or subject to regional outages, the platform includes several safety mechanisms that keep user-facing features functional even during provider downtime.

* **Automated Retries:** The system automatically identifies retryable errors and re-executes requests to absorb temporary API failures.
* **Region Fallback:** To bypass localized outages or rate limits, the platform can automatically route requests to other geographic regions or alternative providers to maintain service continuity (see the final sketch below).

### Recommendation

For organizations scaling AI adoption, the Daangn model suggests that investing early in a centralized gateway and a no-code prompt management environment pays off. This approach secures API management and controls costs while democratizing AI development, letting product teams experiment at a pace that is impossible when tied to traditional software release cycles.
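
A minimal sketch of the client-side pattern the LLM Router enables. The OpenAI-SDK-as-interface and model-name switching come from the post; the router URL and Service ID format are assumptions.

```python
from openai import OpenAI

# Hypothetical router endpoint and per-service credential: teams authenticate
# with a Service ID instead of holding provider API keys.
client = OpenAI(
    base_url="https://llm-router.internal.example/v1",  # assumed URL
    api_key="service-id-my-team",                       # assumed format
)

# Switching providers is just a model-name change; no code rewrite needed.
for model in ("claude-3-5-sonnet", "gpt-4o"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize this listing..."}],
    )
    print(model, resp.choices[0].message.content)
```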
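
The one-time Prompt Studio integration might look like this. The endpoint, payload shape, and prompt ID are assumptions; only the integrate-once pattern is from the post.

```python
import requests

# Hypothetical API: the service references a deployed prompt by ID, so PMs can
# change the prompt text or underlying model without touching this code.
def run_prompt(prompt_id: str, variables: dict) -> str:
    resp = requests.post(
        f"https://prompt-studio.internal.example/v1/prompts/{prompt_id}/run",
        json={"variables": variables},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["output"]

print(run_prompt("listing-title-cleanup", {"title": "Selling iPhone 13!!"}))
```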
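
Finally, a generic sketch of the retry-plus-region-fallback behavior described above. The status codes, region list, and backoff policy are illustrative assumptions, not the platform's actual configuration.

```python
import time

class ApiError(Exception):
    def __init__(self, status: int, message: str = ""):
        super().__init__(message or f"HTTP {status}")
        self.status = status

RETRYABLE = {429, 500, 502, 503, 504}                    # transient, worth retrying
REGIONS = ["us-east-1", "us-west-2", "ap-northeast-2"]   # assumed fallback order

def call_with_fallback(send, request, attempts=3, backoff=0.5):
    """Retry transient errors with exponential backoff, then fall back to the
    next region; non-retryable errors skip straight to the next region."""
    last_error = None
    for region in REGIONS:
        for attempt in range(attempts):
            try:
                return send(region, request)
            except ApiError as err:
                last_error = err
                if err.status not in RETRYABLE:
                    break
                time.sleep(backoff * 2 ** attempt)
    raise last_error
```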