data-governance

2 posts

daangn

won Park": Author. * (opens in new tab)

Daangn’s data governance team addressed the lack of transparency in their data pipelines by building a column-level lineage system based on SQL parsing. By analyzing BigQuery query logs with specialized parsing tools, they mapped intricate data dependencies that standard table-level tracking could not capture. The system now enables precise impact analysis and significantly improves data reliability and troubleshooting speed across the organization.

**The Necessity of Column-Level Visibility**

* Table-level lineage, while easily accessible via BigQuery’s `JOBS` view, fails to identify how specific fields, such as PII or calculated metrics, propagate through downstream systems.
* Without granular lineage, the team faced "cascading failures" in which a single pipeline error triggered a chain of broken tables that were difficult to trace manually.
* Schema migrations, such as modifying a source MySQL column, were historically high-risk because the impact on derivative BigQuery tables and columns was unknown.

**Evaluating Extraction Strategies**

* BigQuery’s native `INFORMATION_SCHEMA` was found to be insufficient because it does not provide column-level detail and often obscures the original source tables when views are involved.
* Frameworks like OpenLineage were considered but rejected due to high operational costs; requiring every team to instrument its own Airflow jobs or notebooks was deemed impractical for a central governance team.
* The team chose a centralized SQL parsing approach, leveraging the fact that nearly all data transformations within the company are executed as SQL queries in BigQuery.

**Technical Implementation and Tech Stack**

* **sqlglot:** This library serves as the core engine, parsing SQL strings into abstract syntax trees (ASTs) to programmatically identify source and destination columns (a minimal sketch follows below).
* **Data Collection:** The system pulls raw query text from `INFORMATION_SCHEMA.JOBS` across all Google Cloud projects to ensure comprehensive coverage.
* **Processing and Orchestration:** Spark handles the parallel processing of massive query logs, while Airflow schedules regular updates to the lineage data.
* **Storage:** The resulting mappings are stored in a centralized BigQuery table (`data_catalog.lineage`), making the dependency map easily accessible for impact analysis and data cataloging.

By centralizing lineage extraction through SQL parsing rather than per-job instrumentation, organizations can achieve comprehensive visibility without placing an integration burden on individual developers. This approach is particularly effective for BigQuery-centric environments where SQL is the primary language for data movement and transformation.
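The post summarized above does not publish Daangn’s parsing code, so the snippet below is only a minimal sketch of the extraction step it describes, built on sqlglot’s public parse/AST API against a hypothetical BigQuery query. The helper names, the example query, and the output shape are illustrative assumptions; real queries would also need handling for CTEs, subqueries, and `SELECT *`.

```python
import sqlglot
from sqlglot import exp


def qualified_name(table: exp.Table) -> str:
    """Render project.dataset.table, skipping parts that are not present."""
    return ".".join(part for part in (table.catalog, table.db, table.name) if part)


def extract_lineage(sql: str) -> list[tuple[str, str, str]]:
    """Return (destination_table, source_table, source_column) rows for one query."""
    tree = sqlglot.parse_one(sql, read="bigquery")

    # Destination: the target of CREATE TABLE ... AS or INSERT INTO, if any.
    statement = tree.find(exp.Create) or tree.find(exp.Insert)
    if statement is None:
        return []
    dest_table = statement.this.find(exp.Table)
    if dest_table is None:
        return []
    dest = qualified_name(dest_table)

    # Map aliases to the source tables referenced in the query body.
    # (Simplified: production code also needs CTE and subquery scope handling.)
    alias_to_table = {}
    for table in tree.find_all(exp.Table):
        if table is dest_table:
            continue
        alias_to_table[table.alias_or_name] = qualified_name(table)

    # Every column read in the query becomes one lineage row.
    rows = []
    for column in tree.find_all(exp.Column):
        source = alias_to_table.get(column.table, column.table or "<unresolved>")
        rows.append((dest, source, column.name))
    return rows


# In the pipeline described, the SQL text would come from the `query` column of
# the regional INFORMATION_SCHEMA.JOBS view; a hypothetical query stands in here.
example = """
CREATE OR REPLACE TABLE analytics.user_daily AS
SELECT u.user_id, u.region, COUNT(o.order_id) AS order_cnt
FROM service.users AS u
LEFT JOIN service.orders AS o ON u.user_id = o.user_id
GROUP BY u.user_id, u.region
"""

for row in extract_lineage(example):
    print(row)  # e.g. ('analytics.user_daily', 'service.users', 'user_id')
```

In the setup the post describes, rows like these would be produced in bulk with Spark and written into the central `data_catalog.lineage` table, where they back impact analysis and cataloging.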

toss

Toss People: Designing a

Data architecture is evolving from a reactive "cleanup" task into a proactive, end-to-end design process that ensures high data quality from the moment of creation. In fast-paced platform environments, the role of a Data Architect is to bridge the gap between rapid product development and reliable data structures, ultimately creating a foundation that both humans and AI can interpret accurately. By shifting from mere post-processing to foundational governance, organizations can maintain technical agility without sacrificing the integrity of their data assets.

**From Post-Processing to End-to-End Governance**

* Traditional data management often involves "fixing" or "matching puzzles" at the end of the pipeline after a service has already changed, leading to perpetual technical debt.
* Effective data architecture requires a culture in which data is treated as a primary design object from its inception, rather than as a byproduct of application development.
* The transition to an end-to-end governance model ensures that data quality is maintained throughout the entire lifecycle, from initial generation in production systems to final analysis and consumption.

**Machine-Understandable Data and Ontologies**

* Modern data design must move beyond human-readable metadata to structures that AI can autonomously process and understand.
* The implementation of semantics-based standard dictionaries and ontologies reduces the need for "inference" or guessing by either humans or machines.
* By explicitly defining the relationships and conceptual meanings of columns and tables, organizations create a high-fidelity environment where AI can provide accurate, context-aware responses without interpretive errors (a sketch of what such definitions might look like follows at the end of this summary).

**Balancing Development Speed with Data Quality**

* In high-growth environments, insisting on "perfect" design can hinder competitive speed; architects must therefore find a middle ground that allows for future extensibility.
* Practical strategies include designing for current needs while leaving "logical room" for anticipated changes, so that future cleanup is minimally disruptive.
* Instead of enforcing rigid rules, architects should design systems where following the standard is the "path of least resistance," making high-quality data entry easier for developers than the alternative.

**The Role of the Modern Data Architect**

* The role has shifted from a fixed corporate function to that of a dynamic problem-solver who uses structural design to resolve business bottlenecks.
* A successful architect must act as a mediator, convincing stakeholders that investing in a five-point quality improvement (e.g., moving from 90 to 95 points) provides significant long-term ROI in decision-making and AI reliability.
* Aspiring architects should focus on incremental structural improvements; any data professional who cares about how data functions is already on the path to data architecture.
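The interview stays at the conceptual level, so the following is purely a hypothetical sketch of what a machine-readable, semantics-based column definition could contain. The dataclass, field names, and example values are assumptions for illustration, not Toss’s actual standard dictionary format.

```python
# Hypothetical illustration: a column definition that states the standard term,
# the concept, and explicit relationships, so that neither humans nor AI have to
# infer what the field means. Structure and values are assumptions, not Toss's.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ColumnDefinition:
    table: str                     # physical table the column belongs to
    column: str                    # physical column name
    standard_term: str             # entry in the company-wide standard dictionary
    concept: str                   # explicit, unambiguous meaning of the value
    unit: Optional[str] = None     # unit of measure, if applicable
    relationships: dict[str, str] = field(default_factory=dict)  # relation -> target


payment_amount = ColumnDefinition(
    table="payments",
    column="amount_krw",
    standard_term="payment_amount",
    concept="Total amount charged to the customer for a single payment, in KRW.",
    unit="KRW",
    relationships={
        "belongs_to": "Payment",
        "aggregates_into": "daily_payment_summary.total_amount_krw",
    },
)
```

Making the meaning and relationships of a column this explicit is what allows both people and AI systems to answer questions about the data without guessing at its semantics.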