
Project Automation with AI: Faster and Smarter

This session from NAVER Engineering Day 2025 explores how developers can elevate AI from a simple assistant to a functional project collaborator through local automation. By leveraging local Large Language Models (LLMs) and the Model Context Protocol (MCP), development teams can automate high-friction tasks such as build failure diagnostics and crash log analysis. The presentation demonstrates that integrating these tools directly into the development pipeline significantly reduces the manual overhead of routine troubleshooting and reporting.

Integrating LLMs with Local Environments

  • Utilizing Ollama allows teams to run LLMs locally, ensuring data privacy and reducing latency compared to cloud-based alternatives (a minimal query sketch follows this list).
  • The mcp-agent framework, built on MCP, serves as the critical bridge, connecting the LLM to local file systems, tools, and project-specific data.
  • This infrastructure enables the AI to act as an "agent" that can autonomously navigate the codebase rather than just processing static text prompts.
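
As a concrete illustration of the local setup, the sketch below queries an Ollama server over its default REST endpoint. The model name and prompt are assumptions for illustration; any locally pulled model would work.

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default; no data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama server and return its full reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Example query; "llama3" is assumed to be pulled locally (ollama pull llama3).
    print(ask_local_llm("Summarize this linker error: ld: symbol(s) not found: _main"))
```

In the session's architecture, mcp-agent sits on top of a call like this, exposing file-system and tool access so the model can gather its own context rather than relying on whatever a human pastes in.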

Build Failure and Crash Monitoring Automation

  • When a build fails, the AI agent automatically parses the logs to identify the root cause, providing a concise summary instead of requiring a developer to sift through thousands of lines of terminal output (see the triage sketch after this list).
  • For crash monitoring, the system goes beyond simple summarization by analyzing stack traces and identifying the specific developer or team responsible for the affected code segment.
  • By automating the initial diagnostic phase, the time between an error occurring and a developer beginning the fix is dramatically shortened.
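
A minimal sketch of this triage flow, reusing the ask_local_llm helper from the earlier sketch. The regex-based log filter and the module-to-owner map are hypothetical stand-ins for whatever filtering and ownership data a real pipeline would use.

```python
import re

# Hypothetical module-to-owner map; a real system might derive this from
# a CODEOWNERS file or version-control history.
OWNERS = {
    "app/network/": "@network-team",
    "app/ui/": "@ui-team",
}

def extract_error_lines(log_text: str, context: int = 2) -> str:
    """Keep only lines near an 'error' marker so the model sees a focused excerpt."""
    lines = log_text.splitlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if re.search(r"\berror\b", line, re.IGNORECASE):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))

def find_owner(stack_trace: str) -> str:
    """Map the first known source path in a stack trace to its owning team."""
    for path_prefix, owner in OWNERS.items():
        if path_prefix in stack_trace:
            return owner
    return "@triage"  # fallback when no mapping matches

def triage_build_failure(log_text: str) -> str:
    """Filter the raw log, then ask the local model for a short diagnosis."""
    excerpt = extract_error_lines(log_text)
    return ask_local_llm(
        "Identify the most likely root cause of this build failure "
        "in at most two sentences:\n" + excerpt
    )
```

Pre-filtering the log before prompting keeps the excerpt within the model's context window and, per the considerations below, hands the model structured input rather than raw noise.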

Intelligent Reporting via Slack

  • The system integrates with Slack to deliver automated, context-aware reports that categorize issues by severity and impact (a minimal delivery sketch follows this list).
  • These reports include actionable insights, such as suggested fixes or links to relevant documentation, directly within the communication channel used by the team.
  • This ensures that project stakeholders remain informed of the system's health without requiring manual status updates from engineers.
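
A minimal sketch of the delivery step, assuming a standard Slack incoming webhook; the webhook URL is a placeholder, and the message layout is illustrative rather than the session's exact format.

```python
import json
import urllib.request

# Placeholder; real deployments read this from configuration or a secret store.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_report(summary: str, severity: str, owner: str) -> None:
    """Post a severity-tagged diagnostic report to Slack via an incoming webhook."""
    message = {"text": f"[{severity.upper()}] {owner}\n{summary}"}
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example: post_report(triage_build_failure(log), "high", find_owner(log))
```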

Considerations for LLM and MCP Implementation

  • While powerful, the combination of LLMs and MCP agents is not a "silver bullet"; it requires careful prompt engineering and boundary setting to prevent hallucination in technical diagnostics (see the prompt sketch after this list).
  • Effective automation depends on the quality of the local context provided to the agent; the more structured the logs and metadata, the more accurate the AI's conclusions.
  • Organizations should evaluate the balance between the computational cost of running local models and the productivity gains achieved through automation.
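
One hedged illustration of such boundary setting: a diagnostic prompt that confines the model to the supplied evidence and gives it an explicit way to decline rather than guess. The wording is an assumption, not the session's actual prompt.

```python
# {log_excerpt} is a str.format placeholder filled in at call time.
DIAGNOSTIC_PROMPT = """You are a build-failure triage assistant.
Rules:
- Base your answer ONLY on the log excerpt between the markers.
- If the excerpt lacks enough evidence, answer exactly: INSUFFICIENT CONTEXT.
- Do not suggest fixes for code you cannot see.

--- LOG EXCERPT START ---
{log_excerpt}
--- LOG EXCERPT END ---

Root cause (at most two sentences):"""
```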

To successfully implement AI-driven automation, developers should start by targeting specific, repetitive bottlenecks—such as triaging build errors—before expanding the agent's scope to more complex architectural tasks. Focusing on the integration between Ollama and mcp-agent provides a secure, extensible foundation for building a truly "smart" development workflow.