LINE / AI

16 posts

line

Building an Enterprise LLM Service, Part 1: Context Engineering

LY Corporation’s engineering team developed an AI assistant for their private cloud platform, Flava, by prioritizing "context engineering" over traditional prompt engineering. To manage a complex environment of 260 APIs and hundreds of technical documents, they implemented a strategy of progressive disclosure to ensure the LLM receives only the most relevant information for any given query. This approach allows the assistant to move beyond simple RAG-based document summarization to perform active diagnostics and resource management based on real-time API data.

### Performance Limitations of Long Contexts

* Research indicates that LLM performance can drop by 13.9% to 85% as context length increases, even if the model technically supports a large token window.
* The phenomenon of "context rot" occurs when low-quality or irrelevant information is mixed into the input, causing the model to generate confident but incorrect answers.
* Because LLMs are stateless, maintaining conversation history and processing dense JSON responses from multiple APIs quickly exhausts context windows and degrades reasoning quality.

### Progressive Disclosure and Tool Selection

* The system avoids loading all 260+ API definitions at once; instead, it analyzes the user's intent to select only the necessary tools, such as loading only Redis-related APIs when a user asks about a cluster.
* Specific product usage hints, such as the distinction between private and CDN settings for Object Storage, are injected only when those specific services are invoked.
* This phased approach significantly reduces token consumption and prevents the model from being overwhelmed by irrelevant technical specifications.

### Response Guidelines and the "Mock Tool Message" Strategy

* The team distinguished between "system prompts" (global rules) and "response guidelines" (situational instructions), such as directing users to a console UI before suggesting CLI commands.
* Injecting specific guidelines into the system prompt often caused "instruction conflict," where the LLM might hallucinate information to satisfy a guideline while ignoring core requirements like using search tools.
* To resolve these conflicts, the team injected guidelines via "ToolMessages": by formatting instructions as if they were results from a tool execution, the LLM treats the information as factual context rather than as a command that might override the system prompt.

To build a robust enterprise LLM service, developers should focus on dynamic context management rather than static prompt optimization. Treating operational guidelines as external data via mock tool messages, rather than as system instructions, provides a scalable way to reduce hallucinations and maintain high performance across hundreds of integrated services.
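The "mock tool message" idea can be sketched as follows. This is a minimal illustration using generic OpenAI-style message dictionaries; the `get_guidelines` helper, the tool name, and the message shapes are assumptions for demonstration, not the team's actual implementation.

```python
# Sketch: injecting a situational guideline as a fake tool result rather than
# as a system instruction. get_guidelines() is a hypothetical lookup table.

def get_guidelines(service: str) -> str:
    """Hypothetical per-service lookup of situational response guidelines."""
    hints = {
        "object-storage": (
            "When the user asks about public access, explain the difference "
            "between private bucket settings and CDN settings, and point to "
            "the console UI before suggesting CLI commands."
        ),
    }
    return hints.get(service, "")

def build_messages(system_prompt: str, user_query: str, service: str) -> list[dict]:
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]
    guideline = get_guidelines(service)
    if guideline:
        # Frame the guideline as the *result* of a tool call, so the model
        # treats it as retrieved context instead of a competing instruction.
        messages.append({
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_guidelines_1",
                "type": "function",
                "function": {"name": "fetch_response_guidelines",
                             "arguments": '{"service": "object-storage"}'},
            }],
        })
        messages.append({
            "role": "tool",
            "tool_call_id": "call_guidelines_1",
            "content": guideline,
        })
    return messages

msgs = build_messages("You are the Flava cloud assistant.",
                      "How do I make my bucket public?", "object-storage")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'tool']
```

Because the guideline only enters the conversation when the relevant service is invoked, this also implements the progressive-disclosure idea: the system prompt stays small, and situational rules arrive as data.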

A Business Trip to Japan Just One Week In? A LINE Plus Developer Relations Newbie's Hectic Onboarding

Joining the Developer Relations (DevRel) team at LINE Plus, a new employee was thrust into a high-stakes business trip to Japan just one week after onboarding to support major global tech events. This immersive experience allowed the recruit to rapidly grasp the company’s engineering culture by facilitating cross-border collaboration and helping run large-scale technical conferences. Ultimately, the journey highlights how a proactive onboarding strategy and a culture of creative freedom enable DevRel professionals to bridge the gap between complex engineering feats and community engagement.

### Global Collaboration at Tech Week

* The trip centered on **Tech-Verse**, a global conference featuring simultaneous interpretation in Korean, English, and Japanese, where the focus was on maintaining operational quality across diverse technical sessions.
* Operational support was provided for **Hack Day**, an in-house hackathon that brought together engineers from various countries to collaborate on rapid prototyping and technical problem-solving.
* The experience included direct coordination with DevRel teams from Japan, Thailand, Taiwan, and Vietnam, establishing a unified approach to technical branding and regional community support.
* Post-event responsibilities included translating live experiences into digital assets, such as "Shorts" video content and technical blog recaps, to maintain engagement after the physical event concluded.

### Modernizing Internal Technical Sharing

* The **Tech Talk** series, a long-standing tradition with over 78 sessions, served as a platform to experiment with deliberately kitschy "B-grade" humorous marketing—including quirky posters and cup holders—to drive offline participation in a remote-friendly work environment.
* To address engineer feedback, the format shifted from passive lectures to **hands-on practical sessions** focusing on AI implementation.
* Specific technical workshops demonstrated how to use tools like **Claude Code** and **ChatGPT** to automate workflows, such as generating weekly reports by integrating **Jira tickets with internal Wikis**.
* Preparation for these sessions involved creating detailed environment setup guides and troubleshooting protocols to ensure a seamless experience for participating developers.

### Scaling AI Literacy via AI Campus Day

* **AI Campus Day** was a large-scale event designed for over 3,000 participants, aimed at lowering the barrier to entry for AI adoption across all departments.
* The "Event & Operation" role involved creating interactive AI photo zones using **Gemini** to familiarize employees with new internal AI tools in a low-pressure setting.
* Event production used AI-driven assets, including AI-generated voices and icons, to demonstrate the practical utility of these tools within standard business communication and video guides.
* The success of the event relied on "participation design," ensuring that even non-technical staff could engage with AI concepts through hands-on play and peer mentoring.

For organizations looking to strengthen their technical culture, this experience suggests that immediately integrating new hires into high-impact global projects can be a powerful onboarding tool. Giving DevRel teams the psychological safety to experiment with unconventional marketing and hands-on technical workshops is essential for maintaining developer engagement in a hybrid work era.

We Held an AI Campus Day to Improve Internal AI Literacy

LY Corporation recently hosted "AI Campus Day," a large-scale internal event designed to bridge the gap between AI theory and practical workplace application for over 3,000 employees. By transforming their office into a learning campus, the company successfully fostered a culture of "AI Transformation" through peer-led mentorship and task-specific experimentation. The event demonstrated that internal context and hands-on participation are far more effective than traditional external lectures for driving meaningful AI literacy and productivity gains.

## Hands-on Experience and Technical Support

* The curriculum featured 10 specialized sessions across three tracks—Common, Creative, and Engineering—to ensure relevance for every job function.
* Sessions ranged from foundational prompt engineering for non-developers to advanced technical topics like building Model Context Protocol (MCP) servers for engineers.
* To ensure smooth execution, the organizers provided comprehensive "Session Guides" containing pre-configured account settings and specific prompt templates.
* The event maintained a high support ratio, with 26 teaching assistants (TAs) available to troubleshoot technical hurdles in real time and dedicated Slack channels for sharing live AI outputs.

## Peer-Led Mentorship and Internal Context

* Instead of hiring external consultants, the program featured 10 internal "AI Mentors" who shared how they integrated AI into their actual daily workflows at LY Corporation.
* Training focused exclusively on company-approved tools, including ChatGPT Enterprise, Gemini, and Claude Code, ensuring all demonstrations complied with internal security protocols.
* Internal mentors were able to provide specific "company context" that external lecturers lack, such as integrating AI with existing proprietary systems and data.
* A rigorous three-stage quality control process—initial flow review, final end-to-end dry run, and technical rehearsal—was implemented to ensure the educational quality of mentor-led sessions.

## Gamification and Cultural Engagement

* The event was framed as a "festival" rather than mandatory training, using campus-themed motifs like "enrollment" and "school attendance" to reduce psychological barriers.
* A "Stamp Rally" system encouraged participation by offering tiered rewards, including welcome kits, refreshments, and subscriptions to premium AI tools.
* Interactive exhibition booths let employees experience AI utility firsthand, such as an AI photo zone using Gemini to generate "campus-style" portraits and an AI Agent Contest booth.
* Strong executive support played a crucial role, with leadership encouraging staff to pause routine tasks for the day and focus entirely on AI experimentation and "playing" with new technologies.

To effectively scale AI literacy within a large organization, move away from passive, one-size-fits-all lectures. Success lies in leveraging internal experts who understand the specific security and operational constraints of the business, and in creating a low-pressure environment where employees can experiment with hands-on tasks relevant to their specific roles.

Safety as a Given, Cost Savings as a Bonus: Why AI Services Need Separate Guardrails

AI developers often rely on system prompts to enforce safety rules, but this integrated approach frequently leads to "over-refusal" and unpredictable shifts in model performance. To ensure both security and operational efficiency, it is increasingly necessary to decouple safety mechanisms into separate guardrail systems that operate independently of the primary model's logic.

## Negative Impact on Model Utility

* Integrating safety instructions directly into system prompts often leads to a high false positive rate (FPR), where the model rejects harmless requests alongside harmful ones.
* Analysis using principal component analysis (PCA) reveals that guardrail prompts shift the model's embeddings in a consistent direction toward refusal, regardless of the input's actual intent.
* Studies show that aggressive safety prompting can cause models to refuse benign technical queries—such as "how to kill a Python process"—because the model adopts an overly conservative decision boundary.

## Positional Bias and Context Neglect

* Research on the "Lost in the Middle" phenomenon indicates that LLMs are most sensitive to information at the beginning and end of a prompt, while accuracy drops significantly for information placed in the middle.
* The Constraint Difficulty Distribution Index (CDDI) demonstrates that instruction order matters; models generally follow instructions better when difficult constraints are placed at the beginning of the prompt.
* In complex system prompts where safety rules are buried in the middle, the model may fail to prioritize these guardrails, leading to inconsistent safety enforcement depending on the prompt's structure.

## The Butterfly Effect of Prompt Alterations

* Small, seemingly insignificant changes to a system prompt—such as adding a single whitespace character, a "Thank you" note, or changing the output format to JSON—can alter more than 10% of a model's predictions.
* Modifying safety-related lines within a unified system prompt can cause "catastrophic performance collapse," where the model's internal reasoning path is diverted, affecting unrelated tasks.
* Because LLMs treat every part of the prompt as a signal that moves their decision boundaries, managing safety and task logic in a single string makes the system brittle and difficult to iterate on.

To build robust, high-performing AI applications, developers should move away from bloated system prompts and implement external guardrails instead. This modular approach allows for precise security filtering without compromising the model's creative or logical capabilities.
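The decoupling argument can be made concrete with a small sketch: safety checks run as a separate stage around the task model, so editing them never touches the task prompt. The `check_input` keyword matcher below is a deliberately naive stand-in for a real guardrail classifier (for example, a dedicated small safety model); it is illustrative, not a production filter.

```python
# Sketch: an external guardrail layered around the task model, instead of
# safety rules baked into the system prompt. check_input() is a naive
# keyword stand-in for a dedicated guardrail classifier.

def check_input(text: str) -> bool:
    """Return True if the text should be blocked. Illustrative only."""
    blocklist = ("build a bomb", "steal credentials")
    return any(phrase in text.lower() for phrase in blocklist)

def call_task_model(prompt: str) -> str:
    """Placeholder for the real LLM call; its system prompt stays task-only."""
    return f"[model answer to: {prompt}]"

def answer(prompt: str) -> str:
    if check_input(prompt):          # input-side guardrail
        return "Sorry, I can't help with that."
    reply = call_task_model(prompt)
    if check_input(reply):           # output-side guardrail on the response
        return "Sorry, I can't help with that."
    return reply

# A benign technical query is no longer caught by an over-conservative
# safety-laden system prompt:
print(answer("How do I kill a Python process?"))
```

Because the guardrail is a separate function, it can be tuned, replaced, or A/B tested without perturbing the task model's prompt, avoiding the "butterfly effect" described above.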

Security Threats You May Encounter When Developing AI Products, and How to Counter Them

Developing AI products introduces unique security vulnerabilities that extend beyond traditional software risks, ranging from package hallucinations to sophisticated indirect prompt injections. To mitigate these threats, organizations must move away from trusting LLM-generated content and instead implement rigorous validation, automated threat modeling, and input/output guardrails. The following summary details the specific risks and mitigation strategies identified by LY Corporation’s security engineering team.

## Slopsquatting and Package Hallucinations

- AI models frequently hallucinate non-existent library or package names when providing coding instructions (e.g., suggesting `huggingface-cli` instead of the correct `huggingface_hub[cli]`).
- Attackers exploit this by registering the hallucinated names on public registries to distribute malware to unsuspecting developers.
- Mitigation requires developers to verify all AI-suggested commands and dependencies before executing them in any environment.

## Prompt Injection and Arbitrary Code Execution

- As seen in CVE-2024-5565 (Vanna AI), attackers can inject malicious instructions into prompts to force the application to execute arbitrary code.
- This vulnerability arises when developers grant LLMs the autonomy to generate and run logic within the application context without sufficient isolation.
- Mitigation involves treating LLM outputs as untrusted data, sanitizing user inputs, and strictly limiting the LLM's ability to execute system-level commands.

## Indirect Prompt Injection in Integrated AI

- AI assistants integrated into office environments (like Gemini for Workspace) are susceptible to indirect prompt injections hidden within emails or documents.
- A malicious email can contain "system-like" instructions that trick the AI into hiding content, redirecting users to phishing sites, or leaking data from other files.
- Mitigation requires robust guardrails that scan both the input data (the content being processed) and the generated output for instructional anomalies.

## Permission Risks in AI Agents and MCP

- The use of the Model Context Protocol (MCP) and coding agents creates risks where an agent might overstep its intended scope.
- If an agent has broad access to a developer's environment, a malicious prompt in a public repository could trick the agent into accessing or leaking sensitive data (such as salary information or private keys) from a private repository.
- Mitigation centers on the principle of least privilege, restricting AI agents to specific, scoped directories and repositories.

## Embedding Inversion and Vector Store Vulnerabilities

- Attacks targeting the retrieval phase of Retrieval-Augmented Generation (RAG) systems can lead to data leaks.
- Embedding inversion techniques may allow attackers to reconstruct the original sensitive text from the vector embeddings stored in a database.
- Securing AI products requires protecting the integrity of the vector store and ensuring that retrieved context does not bypass security filters.

## Automated Security Assessment Tools

- To scale security, LY Corporation is developing internal tools like "ConA" for automated threat modeling and "LAVA" for automated vulnerability assessment.
- These tools aim to identify AI-specific risks during the design and development phases rather than relying solely on manual reviews.

Effective AI security requires a shift in mindset: treat every LLM response as a potential security risk. Developers should adopt automated threat modeling and implement strict input/output validation layers to protect both the application infrastructure and user data from evolving AI-based exploits.
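The dependency-verification advice can be sketched as a simple gate before installation. The `huggingface-cli` vs. `huggingface_hub` example comes from the post itself, but the allowlist contents and the checking function are illustrative assumptions (in practice the allowlist would be an internal registry mirror or a vetted lockfile), not LY Corporation's actual tooling.

```python
# Sketch: vetting AI-suggested package names against an approved allowlist
# before installing, to catch hallucinated ("slopsquatted") dependencies.

APPROVED_PACKAGES = {
    "requests",
    "numpy",
    "huggingface_hub",  # real package; "huggingface-cli" is a known hallucination
}

def normalize(name: str) -> str:
    """Strip extras like '[cli]' and fold '-'/'_' so name variants compare equal."""
    base = name.split("[")[0]
    return base.strip().lower().replace("-", "_")

def vet_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested packages into approved and suspicious lists."""
    approved_norm = {normalize(p) for p in APPROVED_PACKAGES}
    approved, suspicious = [], []
    for name in suggested:
        if normalize(name) in approved_norm:
            approved.append(name)
        else:
            suspicious.append(name)
    return approved, suspicious

ok, flagged = vet_dependencies(["huggingface_hub[cli]", "huggingface-cli", "numpy"])
print(ok)       # ['huggingface_hub[cli]', 'numpy']
print(flagged)  # ['huggingface-cli'] -- block or manually review before install
```

Anything in the `flagged` list would be held for manual review rather than passed to `pip install`, which is exactly the "verify before execution" step the post recommends.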

A One-Month Task Done in Five Days with Vibe Coding! (ChatGPT·Cursor)

This blog post explores how LY Corporation reduced a month-long development task to just five days by leveraging "vibe coding" with generative AI tools like ChatGPT and Cursor. By shifting from traditional, rigid documentation to an iterative, demo-first approach, developers can rapidly validate multiple UI/UX solutions for complex problems like restaurant menu registration. The author concludes that AI's ability to handle frequent rework makes it more efficient to "build fast and iterate" than to aim for perfection through long-form specifications.

### Strategic Shift to Rapid Prototyping

* Traditional development cycles (spec → design → dev → fix) are often too slow to keep up with market trends due to heavy documentation and impact analysis.
* The "vibe coding" approach prioritizes creating working demos over perfect specifications, finding "good enough" answers through rapid feedback loops.
* AI reduces the psychological and logistical burden of starting over, allowing developers to refine the context and quality of outputs through repeated interaction without the friction of manual re-documentation.

### Defining Requirements and Solution Ideation

* Initial requirements are kept minimal, focusing only on the core mission, top priorities, and essential data structures (e.g., product name, image, description) to avoid limiting AI creativity.
* ChatGPT is used to generate a wide range of solution candidates, which are then filtered into five distinct approaches: stepper wizards, live previews with quick add, template/cloning, chat input, and OCR-based photo scanning.
* This stage emphasizes volume and variety, using AI-generated pros and cons to establish selection criteria and identify potential UX bottlenecks early in the process.

### Detailed Design and Multi-Solution Wireframing

* Each of the five chosen solutions is expanded into detailed screen flows and UI elements, such as progress bars, bottom sheets, and validation logic.
* Prompt engineering is applied iteratively; if an AI-generated result lacks a specific feature like temporary storage or mandatory field validation, the prompt is adjusted to regenerate the design instantly.
* The focus remains on defining the "what" (UI elements) and the "how" (user flow) through textual descriptions before moving to actual coding.

### Implementation with Cursor and Flutter

* Cursor is used to generate functional code from the refined wireframes, with Flutter as the framework to enable rapid cross-platform development for both iOS and Android.
* Development follows a "skeleton-first" approach: first creating a main navigation hub with five entry points, then populating each individual solution module one by one.
* Technical architecture decisions, such as using Riverpod for state management or SQLite for data storage, are layered onto the demo afterward, reversing the traditional "stack-first" development order to prioritize functional validation.

### Recommendation

To maximize efficiency, developers should treat AI as a partner for high-speed iteration rather than a one-shot tool. By focusing on creating functional demos quickly and refining them through direct feedback, teams can bypass the bottlenecks of traditional software requirements processes and deliver user-centric products in a fraction of the time.

IUI 2025 Conference Report: On the Sustainability of AI and Human-Centered AI

The IUI 2025 conference highlighted a significant shift in the AI landscape, moving away from a sole focus on model performance toward "human-centered AI" that prioritizes collaboration, ethics, and user agency. The prevailing consensus across key sessions suggests that for AI to be sustainable and trustworthy, it must transcend simple automation to become a tool that augments human perception and decision-making through transparent, interactive, and socially aware design.

## Reality Design and Human Augmentation

The concept of "Reality Design" suggests that human-computer interaction (HCI) research must expand beyond screen-based interfaces to design reality itself. As AI, sensors, and wearables become integrated into daily life, technology can be used to directly augment human perception, cognition, and memory.

* Memory extension: systems can record and reconstruct personal experiences, helping users recall details in educational or professional settings.
* Sensory augmentation: technologies like selective hearing or slow-motion visual playback can enhance a user's natural observational powers.
* Cognitive balance: while AI can assist with task difficulty (e.g., collaborative Lego building), designers must ensure that automation does not erode the human will to learn or remember, echoing historical warnings about technology-induced "forgetfulness."

## Bridging the Socio-technical Gap in AI Transparency

Transparency in AI, particularly in high-risk areas like finance or medicine, should not be limited to showing mathematical model weights. Instead, it must bridge the gap between technical complexity and human understanding by focusing on user goals and social contexts.

* Multi-faceted communication: effective transparency involves model reporting (Model Cards), sharing safety evaluation results, and providing linguistic or visual cues for uncertainty rather than just numerical scores.
* Counterfactual explanations: users develop better-calibrated trust when they can see how a decision would have changed if specific input conditions were different.
* Interaction-based transparency: transparency must be coupled with control, allowing users to act as "adjusters" whose feedback the model reflects in its future outputs.

## Interactive Machine Learning and Human-in-the-Loop

The framework of interactive machine learning (IML) challenges the traditional view of AI as a static black box trained on fixed data. Instead, it proposes an interactive loop in which the user and the model grow together through continuous feedback.

* User-driven training: users should be able to inspect model classifications, correct errors, and have those corrections immediately influence the model's learning path.
* Beyond automation: this approach reframes AI from a replacement for human labor into a collaborative partner that adapts to specific user behaviors and professional expertise.
* Impact on specialized tools: modern applications include educational platforms where students manipulate data directly and research tools that integrate human intuition into large-scale data analysis.

## Collaborative Systems in Specialized Professional Contexts

Practical applications of human-centered AI are being realized in sensitive fields like child counseling, where AI assists experts without replacing the human element.

* Counselor-AI transcription: systems designed for counseling analysis let AI handle the heavy lifting of transcription while counselors manage the nuance and contextual editing.
* Efficiency through partnership: by reducing administrative burdens, these systems enable professionals to spend more time on high-level cognitive tasks and emotional support, demonstrating the value of AI as supportive infrastructure.

The future of AI development requires moving beyond isolated technical optimization to embrace the complexity of the human experience. Organizations and developers should focus on creating systems where transparency is a tool for "appropriate trust" and where design empowers human capabilities rather than simply automating them.

Into the Heat of the PD1 AI Hackathon!

The PD1 AI Hackathon 2025 served as a strategic initiative by LY Corporation to embed innovative artificial intelligence directly into the LINE messaging ecosystem. Over 60 developers collaborated during an intensive 48-hour session to turn AI from a theoretical concept into practical features for messaging, content, and internal development workflows. The event produced several high-utility prototypes that demonstrate how AI can enhance user safety, creative expression, and technical productivity.

## Transforming Voice Communication with NextVoIP

* The "NextVoIP" project used speech-to-text (STT) technology to convert 1:1 and group call audio into real-time data for AI analysis.
* The system was designed to provide life-safety features by detecting potential emergencies or accidents through conversation monitoring.
* AI acted as a communication assistant, suggesting relevant content and conversation topics to help maintain a seamless flow during calls.
* Features allowed callers to enjoy shared digital content together, enriched by AI-driven recommendations.

## Creative Expression with MELODY LINE

* This project focused on the intersection of technology and art by converting chat conversations into unique musical compositions.
* The system analyzed the context and emotional sentiment of messages to automatically generate melodies matching the atmosphere of the chat.
* The implementation showcased the potential for generative AI to provide a multi-sensory experience within a standard messaging interface.

## AI-Driven QA and Test Automation

* The grand-prize-winning project, "IPD," addressed the bottleneck of repetitive manual testing by automating the entire quality assurance lifecycle.
* AI was used to automatically generate and manage complex test cases, significantly reducing the manual effort required for mobile app validation.
* The system included automated test execution and a diagnostic feature that identifies the root cause of failures when a test results in an error.
* The project was specifically lauded for its immediate "production-ready" status, offering a direct path to improving development speed and software reliability.

The results of this hackathon suggest that the most immediate value of AI in large-scale messaging platforms lies in two areas: enhancing user experience through contextual awareness, and streamlining internal engineering via automated QA. Organizations should look toward integrating AI-driven testing tools to reduce technical debt while exploring real-time audio and text analysis to provide proactive safety and engagement features for users.

Extracting Trending Keywords from OpenChat Messages

To enhance user engagement on the LINE OpenChat main screen, LY Corporation developed a system to extract and surface "trending keywords" from real-time message data. By shifting focus from chat room recommendations to content-driven keyword clusters, the team addresses the lack of context in individual messages while providing a more dynamic discovery experience. The approach combines statistical Z-tests to identify frequency spikes with MinHash clustering to eliminate near-duplicate content, ensuring that the trending topics are both relevant and diverse.

**The Shift from Chat Rooms to Content-Driven Recommendations**

* Traditional recommendations surface entire chat rooms, which often require significant user effort to investigate and evaluate.
* Inspired by micro-blogging services, the team aimed to surface messages as individual content pieces to increase the "main screen visit" KPI.
* Because individual chat messages are often fragmented or full of typos, the system groups them by keyword to create meaningful thematic content.

**Statistical Detection of Trending Keywords**

* Simple frequency counts are ineffective because they capture common social fillers like greetings or expressions of gratitude rather than actual trends.
* Trends are defined as keywords showing a sharp increase in frequency compared to a baseline from seven days prior.
* The system uses a Z-test for two-sample proportions to assign a score to each word, filtering for terms with at least 30% frequency growth.
* A seven-day comparison window specifically suppresses weekly cyclical noise (e.g., mentions of "weekend") and captures topics whose popularity peaks over several consecutive days.

**MinHash-Based Message Deduplication**

* Redundant messages, such as copy-pasted text, are removed before frequency aggregation to prevent skewed results and repetitive user experiences.
* The system employs MinHash, a dimensionality-reduction technique, to identify near-duplicate messages based on Jaccard similarity.
* The process involves "shingling" messages into sets of tokens (primarily nouns) and generating $k$-length signatures; messages with identical signatures are clustered together.
* To evaluate the quality of these clusters without high computational cost, the team developed a "SetDiv" (set diversity) metric that operates in linear time.

By combining Z-test statistical modeling with MinHash deduplication, this methodology transforms fragmented chat data into a structured discovery layer. For developers working with high-volume social data, a rolling weekly baseline combined with signature-based clustering offers a scalable way to surface high-velocity trends while filtering out both routine social noise and repetitive content.
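The two statistical building blocks described above can be sketched compactly. The Z-test formula for two sample proportions is standard; the hash function, signature length, and example counts below are illustrative assumptions, not the production configuration.

```python
# Sketch: (1) a two-sample proportion Z-test to score a keyword's frequency
# jump against its 7-days-ago baseline, and (2) k-length MinHash signatures
# for clustering near-duplicate messages.
import hashlib
import math

def trend_z_score(count_now: int, total_now: int,
                  count_prev: int, total_prev: int) -> float:
    """Z-test for two proportions: today's keyword rate vs. the baseline rate."""
    p1 = count_now / total_now
    p2 = count_prev / total_prev
    pooled = (count_now + count_prev) / (total_now + total_prev)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_now + 1 / total_prev))
    return (p1 - p2) / se

def minhash_signature(tokens: set[str], k: int = 4) -> tuple[int, ...]:
    """k-length MinHash signature; messages with equal signatures are clustered.
    Each 'seed' simulates an independent hash function over the token set."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
            for t in tokens)
        for seed in range(k)
    )

# A keyword jumping from 50 to 400 mentions per 100k messages scores far
# above a typical significance cutoff (e.g. z > 3):
print(trend_z_score(400, 100_000, 50, 100_000))

# Identical token sets always produce identical signatures, so exact and
# near-exact copy-pastes collapse into one cluster before aggregation.
a = minhash_signature({"concert", "ticket", "tonight"})
b = minhash_signature({"concert", "ticket", "tonight"})
print(a == b)  # True
```

In the real pipeline the token sets would come from noun-centric shingling, and the Z-score would be combined with the 30% minimum-growth filter mentioned above.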

The Present State of LY Corporation's AI Technology: A Tech-Verse 2025 Recap

Tech-Verse 2025 showcased LY Corporation’s strategic shift toward an AI-integrated ecosystem following the merger of LINE and Yahoo Japan. The event focused on the practical hurdles of deploying generative AI, concluding that the transition from experimental models to production-ready services requires sophisticated evaluation frameworks and deep contextual integration into developer workflows.

## AI-Driven Engineering with Ark Developer

LY Corporation’s internal "Ark Developer" solution demonstrates how AI can be embedded directly into the software development life cycle.

* The system uses a Retrieval-Augmented Generation (RAG) based code assistant to handle tasks such as code completion, security reviews, and automated test generation.
* Rather than treating codebases as simple text documents, the tool performs graph analysis on directory structures to maintain structural context during code synthesis.
* Real-world application includes seamless integration with GitHub for automated pull request (PR) creation, with internal users reporting higher satisfaction than with off-the-shelf tools like GitHub Copilot.

## Quantifying Quality in Generative AI

A significant portion of the technical discussion centered on moving from subjective "vibes-based" assessment toward rigorous, multi-faceted evaluation of AI outputs.

* To measure the quality of generated images, developers used traditional metrics like Fréchet Inception Distance (FID) and Inception Score (IS) alongside LAION’s Aesthetic Score.
* More advanced evaluation techniques were introduced, including CLIP-IQA, Q-Align, and visual question answering (VQA) based on video-language models to analyze image accuracy.
* Technical challenges in image translation and inpainting were highlighted, specifically the difficulty of restoring layout and text structures naturally after optical character recognition (OCR) and translation.
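For reference, the Fréchet Inception Distance mentioned above compares real and generated images by fitting Gaussians to their Inception feature distributions. With feature means $\mu_r, \mu_g$ and covariances $\Sigma_r, \Sigma_g$ for the real and generated sets, the standard formulation is:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```

Lower is better; a score of 0 would mean the two feature distributions match exactly, which is why FID serves as a distribution-level complement to per-image metrics like CLIP-IQA.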
## Global Technical Exchange and Implementation

The conference served as a collaborative hub for engineers across Japan, Taiwan, and Korea to discuss the implementation of emerging standards like the Model Context Protocol (MCP).

* Sessions emphasized the "how-to" of overcoming deployment hurdles rather than just following technical trends.
* Poster sessions (Product Street) and interactive Q&A segments allowed developers to share localized insights on LLM agent performance and agentic workflows.
* The recurring theme across diverse teams was that the "evaluation and verification" stage is now the primary driver of quality in generative AI services.

For organizations looking to scale AI, the key recommendation is to move beyond simple implementation and invest in "evaluation-driven development." By building internal tools that leverage graph-based context and quantitative metrics like Aesthetic Score and VQA, teams can ensure that generative outputs meet professional service standards.


Won't you become a hacker? A visit to Hack Day 2025!

Hack Day 2025 serves as a cornerstone of LY Corporation’s engineering culture, bringing together diverse global teams to innovate beyond their daily operational scopes. By fostering a high-intensity environment focused on creative freedom, the event facilitates technical growth and strengthens interpersonal bonds across international branches. This 19th edition demonstrated how rapid prototyping and cross-functional collaboration can transform abstract ideas into functional AI-driven prototypes within a strict 24-hour window.

### Structure and Participation Dynamics

* The hackathon follows a "9 to 9" format, providing exactly 24 hours of development time followed by a day for presentations and awards.
* Participation is inclusive of all roles, including developers, designers, planners, and HR staff, allowing for holistic product development.
* Teams can be "General Teams" from the same legal entity or "Global Mixed Teams" comprising members from different regions like Korea, Japan, Taiwan, and Vietnam.
* The Developer Relations (DevRel) team facilitates team building for remote employees using digital collaboration tools like Zoom and Miro.

### AI-Powered Personality Analysis Project

* The author's team developed a "Scouter" program inspired by Dragon Ball, designed to measure professional "combat power" based on communication history.
* The system utilizes Slack bots and AI models to analyze message logs and map them to the Big 5 personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism).
* Professional metrics are visualized as game-like character statistics to make personality insights engaging and less intimidating.
* While the original plan involved using AI to generate and print physical character cards, hardware failures with photo printers forced a technical pivot to digital file downloads.

### High-Pressure Presentation and Networking

* Every team is allotted a strict 90-second window to pitch their product and demonstrate a live demo.
* The "90-second rule" includes a mandatory microphone cutoff to maintain momentum and keep the large-scale event engaging for all attendees.
* Dedicated booth sessions follow the presentations, allowing participants to provide hands-on experiences to colleagues and judges.
* The event emphasizes "Perfect the Details," a core company value, by encouraging teams to utilize all available resources—from whiteboards to AI image generators—within the time limit.

### Environmental Support and Culture

* The event occupies an entire office floor, providing a high-density yet comfortable environment designed to minimize distractions during the "Hack Time."
* Cultural exchange is encouraged through "humanity snacks," where participants from different global offices share local treats in dedicated rest areas.
* Strategic scheduling, such as "Travel Days" for international participants, ensures that teams can focus entirely on technical execution once the event begins.

Participating in internal hackathons provides a vital platform for testing new technologies—like LLMs and personality modeling—that may not fit into immediate product roadmaps. For organizations with hybrid work models, these intensive in-person events are highly recommended to bridge the communication gap and build lasting trust between global teammates.


A review of LY's tech conference, 'Tech-Verse 2025'

LY Corporation’s Tech-Verse 2025 conference highlighted the company's strategic pivot toward becoming an AI-centric organization through the "Catalyst One Platform" initiative. By integrating the disparate infrastructures of LINE and Yahoo! JAPAN into a unified private cloud, the company aims to achieve massive cost efficiencies while accelerating the deployment of AI agents across its entire service ecosystem. This transformation focuses on empowering engineers with AI-driven development tools to foster rapid innovation and deliver a seamless, "WOW" experience for global users.

### Infrastructure Integration and the Catalyst One Platform

To address the redundancies following the merger of LINE and Yahoo! JAPAN, LY Corporation is consolidating its technical foundations into a single internal ecosystem known as the Catalyst One Platform.

* **Private Cloud Advantage:** The company maintains its own private cloud to achieve a four-fold cost reduction compared to public cloud alternatives, managed by a lean team of 700 people supporting 500,000 servers.
* **Unified Architecture:** The integration spans several layers, including Infrastructure (Project "DC-Hub"), Cloud (Project "Flava"), and specialized Data and AI platforms.
* **Next-Generation Cloud "Flava":** This platform integrates existing services to enhance VM specifications, VPC networking, and high-performance object storage (Ceph and Dragon).
* **Information Security:** A dedicated "SafeOps" framework is being implemented to provide governance and security across all integrated services, ensuring a safer environment for user data.

### AI Strategy and Service Agentization

A core pillar of LY’s strategy is the "AI Agentization" of all its services, moving beyond simple features to proactive, personalized assistance.

* **Scaling GenAI:** Generative AI has already been integrated into 44 different services within the group.
* **Personalized Agents:** The company is developing the capacity to generate millions of specialized agents that can be linked together to support the unique needs of individual users.
* **Agent Ecosystem:** The goal is to move from a standard platform model to one where every user interaction is mediated by an intelligent agent.

### AI-Driven Development Transformation

Beyond user-facing services, LY is fundamentally changing how its engineers work by deploying internal AI development solutions to all staff starting in July.

* **Code and Test Automation:** Proof of Concept (PoC) results showed a 96% accuracy rate for "Code Assist" and a 97% reduction in time for "Auto Test" procedures.
* **RAG Integration:** The system utilizes Retrieval-Augmented Generation (RAG) to leverage internal company knowledge and guidelines, ensuring high-quality, context-aware development support.
* **Efficiency Gains:** By automating repetitive tasks, the company intends for engineers to shift their focus from maintenance to creative service improvement and innovation.

The successful integration of these platforms and the aggressive adoption of AI-driven development tools suggest that LY Corporation is positioning itself to be a leader in the "AI-agent" era. For technical organizations, LY's model serves as a case study in how large-scale mergers can leverage private cloud infrastructure to fund and accelerate a company-wide AI transition.


Announcing our tech conference, Tech-Verse 2025

LY Corporation is hosting its global technology conference, Tech-Verse 2025, on June 30 and July 1 to showcase the engineering expertise of its international teams. The event features 127 sessions centered on core themes of AI and security, offering a deep dive into how the group's developers, designers, and product managers solve large-scale technical challenges. Interested participants can register for free on the official website to access the online live-streamed sessions, which include real-time interpretation in English, Korean, and Japanese.

### Conference Overview and Access

* The event runs for two days, from 10:00 AM to 6:00 PM (KST), and is primarily delivered via online streaming.
* Registration is open to the public at no cost through the Tech-Verse 2025 official website.
* The conference brings together technical talent from across the LY Corporation Group, including LINE Plus, LINE Taiwan, and LINE Vietnam.

### Multi-Disciplinary Technical Tracks

* The agenda is divided into 12 distinct categories to cover the full spectrum of software development and product lifecycle.
* Day 1 focuses on foundational technologies: AI, Security, Server-side development, Private Cloud, Infrastructure, and Data Platforms.
* Day 2 explores application and management layers: AI Use Cases, Frontend, Mobile Applications, Design, Product Management, and Engineering Management.

### Key Engineering Case Studies and Sessions

* **AI and Data Automation:** Sessions explore the evolution of development processes using AI, the shift from "Vibe Coding" to professional AI-assisted engineering, and the use of Generative AI to automate data pipelines.
* **Infrastructure and Scaling:** Presentations include how the "Central Dogma Control Plane" connects thousands of services within LY Corporation and methods for improving video playback quality for LINE Call.
* **Framework Migration:** A featured case study details the strategic transition of the "Demae-can" service from React Native to Flutter.
* **Product Insights:** Deep dives into user experience design and data-driven insights gathered from LINE Talk's global user base.

Tech-Verse 2025 provides a valuable opportunity for developers to learn from real-world deployments of AI and large-scale infrastructure. Given the breadth of the 127 sessions and the availability of real-time translation, tech professionals should review the timetable in advance to prioritize tracks relevant to their specific engineering interests.


AI and the writer, working together: hand over your code and we'll write the API reference

LY Corporation is addressing the chronic shortage of high-quality technical documentation by treating the problem as an engineering challenge rather than a training issue. By utilizing Generative AI to automate the creation of API references, the Document Engineering team has transitioned from a "manual craftsmanship" approach to an "industrialized production" model. While the system significantly improves efficiency and maintains internal context better than generic tools, the team concludes that human verification remains essential due to the high stakes of API accuracy.

### Contextual Challenges with Generic AI

Standard coding assistants like GitHub Copilot often fail to meet the specific documentation needs of a large organization.

* Generic tools do not adhere to internal company style guides or maintain consistent terminology across projects.
* Standard AI lacks awareness of internal technical contexts; for example, generic AI might mistake a company-specific identifier like "MID" for "Member ID," whereas the internal tool understands its specific function within the LY ecosystem.
* Fragmented deployment processes across different teams make it difficult for developers to find a single source of truth for API documentation.

### Multi-Stage Prompt Engineering

To ensure high-quality output without overwhelming the LLM's "memory," the team refined a complex set of instructions into a streamlined three-stage workflow.

* **Language Recognition:** The system first identifies the programming language and specific framework being used.
* **Contextual Analysis:** It analyzes the API's logic to generate relevant usage examples and supplemental technical information.
* **Detail Generation:** Finally, it writes the core API descriptions, parameter definitions, and response value explanations based on the internal style guide.

### Transitioning to Model Context Protocol (MCP)

While the prototype began as a VS Code extension, the team shifted to using the Model Context Protocol (MCP) to ensure the tool was accessible across various development environments.

* Moving to MCP allows the tool to support multiple IDEs, including IntelliJ, which was a high-priority request from the developer community.
* The MCP architecture decouples the user interface from the core logic, allowing the "host" (like the IDE) to handle UI interactions and parameter inputs.
* This transition reduced the maintenance burden on the Document Engineering team by removing the need to build and update custom UI components for every IDE.

### Performance and the Accuracy Gap

Evaluation of the AI-generated documentation showed strong results, though it highlighted the unique risks of documenting APIs compared to other forms of writing.

* Approximately 88% of the AI-generated comments met the team's internal evaluation criteria.
* The specialized generator outperformed GitHub Copilot in 78% of cases regarding style and contextual relevance.
* The team noted that while a 99% accuracy rate is excellent for a blog post, a single error in a short API reference can render the entire document useless for a developer.

To successfully implement AI-driven documentation, organizations should focus on building tools that understand internal business logic while maintaining a strict "human-in-the-loop" workflow. Developers should use these tools to generate the bulk of the content but must perform a final technical audit to ensure the precision that only a human author can currently guarantee.
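The three-stage workflow can be sketched as staged prompt construction: detect the language first, then build separate prompts for contextual analysis and detail generation. The prompt wording and the extension-based detection heuristic below are illustrative assumptions; the production tool's actual prompts and language recognition are internal.

```python
# Sketch of the three-stage documentation workflow described above.
# Prompt texts and the extension-to-language table are invented for
# illustration, not the Document Engineering team's real prompts.

EXT_TO_LANG = {".py": "Python", ".kt": "Kotlin", ".java": "Java", ".ts": "TypeScript"}

def detect_language(filename):
    """Stage 1: identify the programming language (here, crudely, by extension)."""
    for ext, lang in EXT_TO_LANG.items():
        if filename.endswith(ext):
            return lang
    return "unknown"

def build_stage_prompts(filename, source):
    """Stages 2-3: build the contextual-analysis and detail-generation prompts."""
    lang = detect_language(filename)
    analysis = (
        f"Analyze this {lang} API's logic and propose relevant usage examples "
        f"and supplemental technical information:\n{source}"
    )
    details = (
        f"Write the core {lang} API description, parameter definitions, and "
        f"response-value explanations per the internal style guide:\n{source}"
    )
    return lang, analysis, details

lang, analysis, details = build_stage_prompts("client.py", "def get_user(mid): ...")
print(lang)  # -> Python
```

Splitting the instructions this way is what keeps any single LLM call small, which matches the article's rationale of not overwhelming the model's "memory" with one monolithic instruction set.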


Introducing a RAG-based bot to streamline inquiry handling

To address the operational burden of handling repetitive user inquiries for the AWX automation platform, LY Corporation developed a support bot utilizing Retrieval-Augmented Generation (RAG). By combining internal documentation with historical Slack thread data, the system provides automated, context-aware answers that significantly reduce manual SRE intervention. This approach enhances service reliability by ensuring users receive immediate assistance while allowing engineers to focus on high-priority development tasks.

### Technical Infrastructure and Stack

* **Slack Integration**: The bot is built using the **Bolt for Python** framework to handle real-time interactions within the company’s communication channels.
* **LLM Orchestration**: **LangChain** is used to manage the RAG pipeline; the developers suggest transitioning to LangGraph for teams requiring more complex multi-agent workflows.
* **Embedding Model**: The **paraphrase-multilingual-mpnet-base-v2** (SBERT) model was selected to support multi-language inquiries from LY Corporation’s global workforce.
* **Vector Database**: **OpenSearch** serves as the vector store, chosen for its availability as an internal PaaS and its efficiency in handling high-dimensional data.
* **Large Language Model**: The system utilizes **OpenAI (ChatGPT) Enterprise**, which ensures business data privacy by preventing the model from training on internal inputs.

### Enhancing LLM Accuracy through RAG and Vector Search

* **Overcoming LLM Limits**: Traditional LLMs suffer from "hallucinations," a lack of up-to-date information, and opaque sourcing; RAG addresses this by supplying the model with specific, trusted context at prompt time.
* **Embedding and Vectorization**: Textual data from wikis and chats is converted into high-dimensional vectors, where semantically similar phrases (e.g., "Buy" and "Purchase") are stored in close proximity.
* **k-NN Retrieval**: When a user asks a question, the bot uses the **k-Nearest Neighbors (k-NN)** algorithm to retrieve the top *k* most relevant snippets of information from the vector database.
* **Contextual Generation**: Rather than relying on its internal training data, the LLM generates a response based specifically on the retrieved snippets, leading to higher accuracy and domain-specific relevance.

### AWX Support Bot Workflow and Data Sources

* **Multi-Source Indexing**: The bot references two main data streams: the official internal AWX guide wiki and historical Slack inquiry threads where previous solutions were discussed.
* **Automated First Response**: The workflow begins when a user submits a query via a Slack workflow; the bot immediately processes the request and provides an initial AI-generated answer.
* **Human-in-the-Loop Validation**: After receiving an answer, users can click "Issue Resolved" to close the ticket or "Call AWX Admin" if the AI's response was insufficient.
* **Efficiency Gains**: This tiered approach filters out "RTFM" (Read The F***ing Manual) style questions, ensuring that human administrators only spend time on unique or complex technical issues.

Implementing a RAG-based support bot is a highly effective strategy for SRE teams looking to scale their internal support without increasing headcount. For the best results, organizations should focus on maintaining clean internal documentation and selecting embedding models that reflect the linguistic diversity of their specific workforce.
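The k-NN retrieval step at the heart of this pipeline can be illustrated without OpenSearch: given precomputed embeddings, rank snippets by cosine similarity to the query vector and keep the top *k*. The toy 2-D vectors below stand in for the 768-dimensional SBERT embeddings the real system stores; the corpus entries are invented examples.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def knn_retrieve(query_vec, corpus, k=2):
    """Return the k snippets whose embeddings lie closest to the query vector."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

# Toy 2-D embeddings; a real pipeline would store 768-d SBERT vectors in OpenSearch.
# Note that "buy" and "purchase" snippets land near each other, as described above.
corpus = [
    {"text": "How to buy credits", "vec": [0.9, 0.1]},
    {"text": "How to purchase a plan", "vec": [0.85, 0.2]},
    {"text": "Restarting an AWX job", "vec": [0.1, 0.95]},
]
print(knn_retrieve([1.0, 0.15], corpus, k=2))
# -> ['How to buy credits', 'How to purchase a plan']
```

In production, OpenSearch performs this ranking with approximate nearest-neighbor indexes rather than a full sort, but the exact-cosine version above captures the retrieval semantics the bullets describe.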