ai-literacy

2 posts


We held AI Campus Day

LY Corporation recently hosted "AI Campus Day," a large-scale internal event designed to bridge the gap between AI theory and practical workplace application for over 3,000 employees. By transforming its office into a learning campus, the company fostered a culture of "AI Transformation" through peer-led mentorship and task-specific experimentation. The event demonstrated that internal context and hands-on participation are far more effective than traditional external lectures for driving meaningful AI literacy and productivity gains.

## Hands-on Experience and Technical Support

* The curriculum featured 10 specialized sessions across three tracks (Common, Creative, and Engineering) to ensure relevance for every job function.
* Sessions ranged from foundational prompt engineering for non-developers to advanced technical topics such as building Model Context Protocol (MCP) servers for engineers.
* To ensure smooth execution, the organizers provided comprehensive "Session Guides" containing pre-configured account settings and specific prompt templates.
* The event maintained a high support ratio, with 26 teaching assistants (TAs) available to troubleshoot technical hurdles in real time, plus dedicated Slack channels for sharing live AI outputs.

## Peer-Led Mentorship and Internal Context

* Instead of hiring external consultants, the program featured 10 internal "AI Mentors" who shared how they integrated AI into their actual daily workflows at LY Corporation.
* Training focused exclusively on company-approved tools, including ChatGPT Enterprise, Gemini, and Claude Code, ensuring all demonstrations complied with internal security protocols.
* Internal mentors were able to provide the specific "company context" that external lecturers lack, such as integrating AI with existing proprietary systems and data.
* A rigorous three-stage quality control process (initial flow review, final end-to-end dry run, and technical rehearsal) ensured the educational quality of mentor-led sessions.

## Gamification and Cultural Engagement

* The event was framed as a "festival" rather than mandatory training, using campus-themed motifs like "enrollment" and "school attendance" to reduce psychological barriers.
* A "Stamp Rally" system encouraged participation by offering tiered rewards, including welcome kits, refreshments, and subscriptions to premium AI tools.
* Interactive exhibition booths let employees experience AI utility firsthand, such as an AI photo zone using Gemini to generate "campus-style" portraits and an AI Agent Contest booth.
* Strong executive support played a crucial role, with leadership encouraging staff to pause routine tasks for the day and focus entirely on AI experimentation and "playing" with new technologies.

To scale AI literacy effectively within a large organization, move away from passive, one-size-fits-all lectures. Success lies in leveraging internal experts who understand the specific security and operational constraints of the business, and in creating a low-pressure environment where employees can experiment with hands-on tasks relevant to their specific roles.
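The "Session Guides" above paired account settings with reusable prompt templates so non-developers could start from a vetted prompt rather than a blank page. A minimal sketch of what such a template might look like, using Python's standard `string.Template`; the field names and wording here are illustrative assumptions, not the actual guide contents:

```python
# Hypothetical sketch of a reusable prompt template like those a session
# guide might distribute; fields and phrasing are illustrative only.
from string import Template

SUMMARY_TEMPLATE = Template(
    "You are an assistant for the $team team.\n"
    "Summarize the following meeting notes in $bullet_count bullet points,\n"
    "highlighting action items and their owners.\n\n"
    "Notes:\n$notes"
)

def build_prompt(team: str, notes: str, bullet_count: int = 5) -> str:
    """Fill the template so a vetted prompt can be reused across teams."""
    return SUMMARY_TEMPLATE.substitute(
        team=team, notes=notes, bullet_count=bullet_count
    )

print(build_prompt("Design", "Discussed Q3 roadmap; Kim owns the mockups."))
```

Distributing templates like this keeps the prompt wording under review while letting each employee swap in their own task-specific content.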

kakao

AI TOP 100

The AI TOP 100 contest was designed to shift the focus from evaluating AI model performance to measuring human proficiency in solving real-world problems through AI collaboration. By prioritizing the problem-solving process over the final output alone, the organizers sought to identify individuals who can define clear goals and navigate the technical limitations of current AI tools. The initiative's conclusion is that true AI literacy is the ability to maintain a "human-in-the-loop" workflow in which human intuition guides AI execution and verification.

### Core Philosophy of Human-AI Collaboration

* **Human-in-the-Loop:** The contest emphasizes a cycle of human analysis, AI problem-solving, and human verification. This ensures that the human remains the "pilot" who directs the AI engine and takes responsibility for the quality of the result.
* **Strategic Intervention:** Participants were encouraged to provide AI with structural context it might struggle to perceive (like complex table relationships) and to perform data pre-processing to improve AI accuracy.
* **Task Delegation:** For complex iterative tasks, such as generating images for a montage, solvers were expected to build automated pipelines using AI agents to handle repetitive feedback loops while focusing human effort on higher-level strategy.

### Designing Against "One-Shot" Solutions

* **Low Barrier, High Ceiling:** Problems were designed to be intuitive enough for anyone to understand but complex enough to prevent "one-shot" solutions (the "click-and-solve" trap).
* **Targeting Technical Weaknesses:** Organizers intentionally embedded technical hurdles that current LLMs struggle with, forcing participants to demonstrate how they bridge the gap between AI limitations and a correct answer.
* **The Difficulty Ladder:** To account for varying domain expertise (e.g., OCR experience), problems used a multi-part structure: "Easy" starting questions to build momentum and "Medium" hint questions that guided participants toward the more difficult "Killer" components.

### The 4-Pattern Problem Framework

* **P1 - Insight (Analysis & Definition):** Identifying meaningful opportunities or problems within complex, unstructured data.
* **P2 - Action (Implementation & Automation):** Developing functional code or workflows to execute a defined solution.
* **P3 - Persuasion (Strategy & Creativity):** Generating logical and creative content to communicate technical solutions to non-technical stakeholders.
* **P4 - Decision (Optimization):** Making optimal choices and simulations to maximize goals under specific constraints.

### Quality Assurance and Score Calibration

* **4-Stage Pipeline:** Problems moved from Ideation to Drafting (testing for one-shot immunity), then to Candidate (analyzing abuse vulnerabilities), and finally to a Final selection based on difficulty balance.
* **Cross-Model Validation:** Internal and alpha testers solved problems using various models, including Claude, GPT, and Gemini, to ensure that no single tool could bypass the intended human-led process.
* **Effort-Based Scoring:** Instead of uniform points, scores were calibrated to the "effort cost" and human competency each problem demanded, so total points varied per problem to better reflect true difficulty.

In an era of rapidly evolving AI, the ability to "use" a tool is becoming less valuable than the ability to "collaborate" with it. This shift requires building automated pipelines and taking a "difficulty ladder" approach to complex, multi-stage problems that AI cannot yet solve in a single iteration.
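The human-in-the-loop cycle described above (human analysis, AI problem-solving, human verification) can be sketched as a simple feedback loop. This is a minimal illustration, not the contest's actual tooling: `call_model` is a hypothetical stand-in for a real LLM API call, and `verify` represents the human "pilot" checking each attempt:

```python
# Sketch of a human-in-the-loop workflow: the human defines the goal,
# verifies each AI attempt, and feeds corrections back until satisfied.
# `call_model` is a hypothetical placeholder for a real LLM API call.
from typing import Callable, Tuple

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"draft answer for: {prompt}"

def human_in_the_loop(
    goal: str,
    verify: Callable[[str], Tuple[bool, str]],
    max_rounds: int = 3,
) -> str:
    """Iterate AI attempts under human verification (the 'pilot' role)."""
    prompt = goal
    answer = ""
    for _ in range(max_rounds):
        answer = call_model(prompt)
        ok, feedback = verify(answer)  # the human checks the output
        if ok:
            return answer
        # Fold the human's correction into the next prompt.
        prompt = f"{goal}\nPrevious attempt: {answer}\nFix: {feedback}"
    return answer  # best effort after max_rounds
```

The key design point matches the contest's philosophy: the loop never returns an answer the human has not approved (short of the round limit), keeping responsibility for quality with the person, not the model.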