discord

How to Set Up Your Server’s Roles for Members, Mods & Admins

Effective server management hinges on a well-structured permission system that balances user freedom with community safety. By categorizing roles into three distinct tiers (members, moderators, and admins), server owners can create a scalable environment suited to both intimate friend groups and massive public communities. This hierarchical approach ensures that every user has the access they need without compromising overall server security.

**Foundational Permissions for General Members**

* Includes the baseline access required for every user to interact, chat, and participate within the community.
* Focuses on fostering a welcoming environment where standard members can engage without needing administrative oversight.
* Serves as the starting point for all servers, whether they are small private spaces or large public hubs with thousands of users.

**Enforcement Capabilities for Community Moderators**

* Groups the permissions designed for users responsible for maintaining order and managing daily interactions.
* Provides middle-tier authority for managing user behavior without granting full control over server settings.
* Allows oversight levels to be customized and scaled to the unique needs and size of the specific community.

**Administrative Controls for Server Owners**

* Covers "superpowerful" permissions that grant comprehensive technical control over the server's entire infrastructure.
* Reserved for the highest level of trust, enabling the management of roles, channel structures, and global settings.
* Reflects the most up-to-date permission set available in the application as of the June 6, 2025, update.

To build a sustainable community, server owners should avoid a "one size fits all" approach and instead audit their permissions regularly against these three tiers. Aligning role capabilities with the specific needs and scale of the server is the most effective way to prevent chaos and foster long-term member engagement.
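The three tiers can be pictured as strictly nested permission sets: each level inherits everything below it and adds a few trusted capabilities. A minimal sketch in Python, where the permission names are hypothetical labels rather than Discord's actual permission identifiers:

```python
# Illustrative only: permission names are hypothetical labels, not
# Discord's real permission flags.
MEMBER = frozenset({"view_channels", "send_messages", "add_reactions", "connect_voice"})
MODERATOR = MEMBER | frozenset({"delete_messages", "timeout_members", "kick_members"})
ADMIN = MODERATOR | frozenset({"manage_roles", "manage_channels", "manage_server"})

def can(role: frozenset, permission: str) -> bool:
    """Return True if the given tier includes a permission."""
    return permission in role

# The hierarchy is strictly nested: each tier is a proper superset of the one below.
assert MEMBER < MODERATOR < ADMIN
assert can(MODERATOR, "kick_members") and not can(MEMBER, "kick_members")
```

Modeling tiers as supersets makes audits simple: any permission that appears in a lower tier but not a higher one is a configuration error.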

google

MUVERA: Making multi-vector retrieval as fast as single-vector search

MUVERA is a state-of-the-art retrieval algorithm that simplifies the computationally intensive process of multi-vector retrieval by converting it into a single-vector Maximum Inner Product Search (MIPS). By transforming complex multi-vector sets into Fixed Dimensional Encodings (FDEs), the system maintains the high accuracy of models like ColBERT while achieving the speed and scalability of traditional search infrastructures. This approach allows for efficient retrieval across massive datasets by leveraging highly optimized geometric search techniques that were previously incompatible with multi-vector similarity measures.

## The Limitations of Multi-Vector Retrieval

While traditional models use a single embedding for an entire document, multi-vector models generate an embedding for every token, providing superior semantic depth but creating significant overhead.

* Multi-vector representations lead to a massive increase in embedding volume, requiring more storage and processing power.
* Similarity is typically calculated using "Chamfer matching," a non-linear operation that sums, over each query token, its maximum similarity to any document token.
* Because Chamfer similarity is more complex than a standard dot product, it cannot directly use sublinear search algorithms, often necessitating expensive exhaustive comparisons.

## Fixed Dimensional Encodings (FDEs)

The core innovation of MUVERA is the reduction of multi-vector sets into a single, manageable vector representation that preserves mathematical relationships.

* FDEs are single vectors designed so that their inner product closely approximates the original multi-vector Chamfer similarity.
* The transformation process is "data-oblivious," meaning the mapping does not need to be trained on or adjusted for specific datasets or changes in data distribution.
* By squeezing group information into a fixed-length format, MUVERA allows complex data points to be stored and queried using existing single-vector indexing structures.

## The MUVERA Retrieval Pipeline

The algorithm functions as a multi-stage process that prioritizes both speed and precision through a retrieve-and-rerank architecture.

* **FDE Generation:** Query and document multi-vector sets are mapped into FDEs to capture essential similarity information.
* **MIPS-based Retrieval:** A standard MIPS solver indexes the document FDEs and rapidly identifies a set of likely candidates for a given query.
* **Re-ranking:** The initial candidates are refined using the original, exact Chamfer similarity score to ensure the highest possible accuracy in the final results.

MUVERA provides a practical framework for scaling high-accuracy multi-vector models to massive datasets without the traditional latency penalties. Its ability to bridge the gap between complex semantic modeling and optimized search infrastructure makes it a versatile tool for modern information retrieval systems.
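The exact Chamfer similarity used in the re-ranking stage is only a few lines of NumPy. In this toy sketch, every document is scored exhaustively; in MUVERA that exhaustive pass is replaced by the fast FDE/MIPS candidate-generation stage, with Chamfer scoring applied only to the shortlist:

```python
import numpy as np

def chamfer_similarity(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Chamfer similarity: for each query-token embedding, take its maximum
    inner product with any document-token embedding, then sum over query tokens."""
    sims = query_vecs @ doc_vecs.T  # shape: (num_query_tokens, num_doc_tokens)
    return float(sims.max(axis=1).sum())

rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))                        # 4 query-token embeddings, dim 8
docs = [rng.normal(size=(6, 8)) for _ in range(100)]   # 100 toy multi-vector documents

# Re-ranking step: score candidates with exact Chamfer similarity and keep the top 10.
scores = [chamfer_similarity(query, d) for d in docs]
top10 = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:10]
```

Because the `max` makes this score non-linear in the document vectors, it cannot be expressed as a single dot product, which is exactly the incompatibility with sublinear MIPS indexes that the FDE transformation works around.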

line

Hosting the Tech Conference Tech-Verse 2025

LY Corporation is hosting its global technology conference, Tech-Verse 2025, on June 30 and July 1 to showcase the engineering expertise of its international teams. The event features 127 sessions centered on core themes of AI and security, offering a deep dive into how the group's developers, designers, and product managers solve large-scale technical challenges. Interested participants can register for free on the official website to access the online live-streamed sessions, which include real-time interpretation in English, Korean, and Japanese.

### Conference Overview and Access

* The event runs for two days, from 10:00 AM to 6:00 PM (KST), and is primarily delivered via online streaming.
* Registration is open to the public at no cost through the Tech-Verse 2025 official website.
* The conference brings together technical talent from across the LY Corporation Group, including LINE Plus, LINE Taiwan, and LINE Vietnam.

### Multi-Disciplinary Technical Tracks

* The agenda is divided into 12 distinct categories to cover the full spectrum of software development and the product lifecycle.
* Day 1 focuses on foundational technologies: AI, Security, Server-side development, Private Cloud, Infrastructure, and Data Platforms.
* Day 2 explores application and management layers: AI Use Cases, Frontend, Mobile Applications, Design, Product Management, and Engineering Management.

### Key Engineering Case Studies and Sessions

* **AI and Data Automation:** Sessions explore the evolution of development processes using AI, the shift from "Vibe Coding" to professional AI-assisted engineering, and the use of Generative AI to automate data pipelines.
* **Infrastructure and Scaling:** Presentations include how the "Central Dogma Control Plane" connects thousands of services within LY Corporation and methods for improving video playback quality for LINE Call.
* **Framework Migration:** A featured case study details the strategic transition of the "Demae-can" service from React Native to Flutter.
* **Product Insights:** Deep dives into user experience design and data-driven insights gathered from LINE Talk's global user base.

Tech-Verse 2025 provides a valuable opportunity for developers to learn from real-world deployments of AI and large-scale infrastructure. Given the breadth of the 127 sessions and the availability of real-time translation, tech professionals should review the timetable in advance to prioritize tracks relevant to their specific engineering interests.

google

From research to climate resilience

Google Research is leveraging advanced artificial intelligence to transform climate science from theoretical exploration into scalable, real-world resilience tools. By developing sophisticated models for floods, cyclones, and hyper-local weather, the initiative provides critical lead times that empower communities to protect lives and livelihoods against increasingly frequent environmental threats. This transition from "impossible" research to global implementation highlights AI's capacity to bridge data gaps in the world's most vulnerable regions.

## AI-Powered Global Flood Forecasting

* Google developed a global hydrological AI model, recently published in *Nature*, which enables riverine flood forecasts up to seven days in advance.
* The system utilizes "virtual gauges" to analyze historical data and provide predictions in regions where physical water-monitoring infrastructure is non-existent.
* The Flood Hub platform now covers over 100 countries and 700 million people, providing an expert data layer and API access for local governments and researchers.

## Cyclone Tracking and Intensity Prediction

* Collaborative research between Google DeepMind and Google Research has produced models that predict storm existence, track, intensity, and size up to 15 days in advance.
* The AI generates up to 50 different possible scenarios for each storm, providing a more nuanced view of potential impact than traditional physics-based supercomputer simulations.
* Through the new Weather Lab website, these experimental models are being shared with the US National Hurricane Center to assist in forecasting during the Atlantic hurricane season.

## Global Nowcasting with MetNet-3

* The MetNet-3 state-of-the-art neural weather model provides hyper-local precipitation forecasts at 5 km resolution, updated every 15 minutes.
* By utilizing satellite observations instead of traditional ground-based radar, the system delivers reliable weather data to regions like Africa that lack extensive physical infrastructure.
* These 12-hour "nowcasting" windows are integrated directly into Google Search, specifically helping agricultural communities react to changing conditions to improve crop yields and reduce waste.

These advancements demonstrate that the "art of the possible" is rapidly expanding, offering a future where data-scarce regions can access the same life-saving predictive capabilities as developed nations through global partnerships and satellite-based modeling.

google

A colorful quantum future

Google Quantum AI researchers have successfully implemented "color codes" for quantum error correction on the superconducting Willow chip, presenting a more efficient alternative to the standard surface code. This approach utilizes a unique triangular geometry to reduce the number of physical qubits required for a logical qubit while dramatically increasing the speed of logical operations. The results demonstrate that the system has crossed the performance threshold where increasing the code distance successfully suppresses logical error rates.

## Resource Efficiency through Triangular Geometry

* Unlike the square-shaped surface code, the color code uses a hexagonal tiling arranged in a triangular patch to encode logical information.
* This geometric configuration requires significantly fewer physical qubits to achieve the same "distance" (the number of physical errors needed to cause a logical error) compared to surface codes.
* Experimental results comparing distance-3 and distance-5 color codes showed a 1.56× suppression in logical error rates at the higher distance, confirming the code's viability on current hardware.
* While the color code requires more complex decoding algorithms and deeper physical circuits, recent advances in decoders like AlphaQubit have enabled the system to operate below the error correction threshold.

## Accelerating Logical Gates

* Color codes allow for many single-qubit logical operations to be executed in a single step (transversal gates), whereas surface codes often require multiple error-correction cycles.
* A logical Hadamard gate, for instance, can be executed in approximately 20 ns using a color code, which is nearly 1,000 times faster than the same operation on a surface code.
* Faster execution reduces the number of error-correction cycles an algorithm must endure, which indirectly lowers the physical qubit requirements for maintaining logical stability.
* The research team verified these improvements through "logical randomized benchmarking," confirming high-fidelity execution of logical operations.

## Logical State Injection and Magic States

* The researchers demonstrated a "state injection" technique, which is the process of preparing a physical qubit in a specific state and then expanding it into a protected logical state.
* This process is essential for creating "magic states" (T-states), which are necessary for performing the arbitrary qubit rotations required for complex quantum algorithms.
* By moving states from the physical to the logical level, the color code architecture provides a clear path toward executing the universal gate sets needed to outperform classical computers.

While the color code currently exhibits a lower error suppression factor than the surface code, its advantages in hardware efficiency and gate speed suggest it may be the superior architecture for large-scale, fault-tolerant quantum computing as device hardware continues to improve.
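The reported 1.56× suppression can be read as an exponential scaling law. A back-of-the-envelope sketch (not from the paper, and using a hypothetical distance-3 error rate `EPS_D3` purely for illustration) of how such a factor compounds as code distance grows:

```python
# Hypothetical projection, assuming the standard exponential error-suppression
# model: eps(d) = eps_3 / Lambda ** ((d - 3) / 2), where each increase of the
# code distance by 2 divides the logical error rate by Lambda.
LAMBDA = 1.56   # suppression factor reported between distance-3 and distance-5
EPS_D3 = 1e-2   # hypothetical distance-3 logical error rate per cycle (illustrative)

def projected_error_rate(distance: int) -> float:
    """Projected logical error rate per cycle at an odd code distance >= 3."""
    return EPS_D3 / LAMBDA ** ((distance - 3) / 2)

# Each two-step increase in distance divides the error rate by ~1.56.
rates = {d: projected_error_rate(d) for d in (3, 5, 7, 9)}
```

This is why operating above the threshold (Λ > 1) matters: only then does adding qubits (increasing distance) make the logical qubit strictly more reliable.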

google

Unlocking rich genetic insights through multimodal AI with M-REGLE

Google Research has introduced M-REGLE, a multimodal AI framework designed to analyze diverse health data streams simultaneously to uncover the genetic underpinnings of complex diseases. By jointly modeling complementary signals, such as electrocardiograms (ECG) and photoplethysmograms (PPG), the method captures shared biological information and reduces noise more effectively than unimodal approaches. This integrated analysis significantly enhances the discovery of genetic associations and improves the prediction of cardiovascular conditions like atrial fibrillation.

## Technical Architecture and Workflow

M-REGLE utilizes a multi-step process to transform raw physiological waveforms into actionable genetic insights:

* **Multimodal Integration:** Instead of processing data types in isolation, the model combines multiple inputs, such as the 12 leads of an ECG or paired ECG and PPG data, to capture overlapping signals.
* **Latent Representation Learning:** The system employs a convolutional variational autoencoder (CVAE) to compress these high-dimensional waveforms into a low-dimensional "signature" of latent factors.
* **Statistical Refinement:** Principal component analysis (PCA) is applied to the CVAE-generated signatures to ensure the learned factors are independent and uncorrelated.
* **Genetic Mapping:** These independent factors are analyzed via genome-wide association studies (GWAS) to identify significant correlations between physiological signatures and specific genetic variations.

## Improved Data Reconstruction and Genetic Sensitivity

The transition from unimodal (U-REGLE) to multimodal modeling has led to substantial gains in both data accuracy and biological discovery:

* **Error Reduction:** M-REGLE achieved a 72.5% reduction in reconstruction error for 12-lead ECGs compared to analyzing each lead separately, indicating much higher fidelity in capturing essential waveform characteristics.
* **Increased Discovery Power:** In a study involving over 40,000 participants from the UK Biobank, the multimodal approach identified 3,251 significant genetic loci associated with 12-lead ECGs, a notable increase over the 2,215 loci found by unimodal methods.
* **Novel Findings:** The model identified specific genetic links, such as the *RBM20* locus, which were previously missed by standard clinical measurements but are known to be critical for heart muscle function.

## Interpretability and Disease Prediction

Beyond identifying associations, M-REGLE offers generative capabilities that help clinicians understand the relationship between latent data and physical health:

* **Waveform Synthesis:** By altering specific coordinates within the learned embeddings, researchers can observe how individual latent factors correspond to physical changes in a patient's ECG T-wave or PPG peaks.
* **Clinical Utility:** The model identified specific embeddings (positions 4, 6, and 10) that distinguish patients with atrial fibrillation (AFib) from those without.
* **Predictive Performance:** M-REGLE's embeddings outperformed traditional clinical polygenic risk scores (PRS) in predicting AFib, demonstrating the value of incorporating raw waveform data into risk assessments.

## Practical Applications

Researchers and clinicians can leverage M-REGLE to extract richer insights from existing biobank data and wearable device outputs. By integrating multiple modalities into a single analytical pipeline, the framework provides a more comprehensive view of organ system health, facilitating the identification of therapeutic targets and more accurate disease screening protocols.
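The "Statistical Refinement" step in the workflow above, applying PCA so the latent factors are uncorrelated before GWAS, can be sketched in a few lines of NumPy. Random data stands in for real CVAE embeddings here; this is an illustration of the decorrelation idea, not M-REGLE's actual implementation:

```python
import numpy as np

# Random matrix product stands in for correlated CVAE latent embeddings
# (500 hypothetical participants, 12 latent factors).
rng = np.random.default_rng(42)
latents = rng.normal(size=(500, 12)) @ rng.normal(size=(12, 12))

def pca_decorrelate(x: np.ndarray) -> np.ndarray:
    """Project centered data onto the eigenvectors of its covariance matrix,
    yielding factors that are mutually uncorrelated."""
    centered = x - x.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    return centered @ eigvecs

factors = pca_decorrelate(latents)

# The covariance of the projected factors is diagonal: each factor can now be
# tested in a GWAS without redundant, correlated signals.
cov_out = np.cov(factors, rowvar=False)
assert np.allclose(cov_out - np.diag(np.diag(cov_out)), 0.0, atol=1e-8)
```

Decorrelating the factors matters for the downstream GWAS: independent phenotypes keep the association tests from repeatedly rediscovering the same shared signal.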

line

Replacing a Payment System Database That Processes

The LINE Billing Platform team recently migrated its core payment database from Nbase-T to Vitess to address rising licensing costs while maintaining the high availability required for financial transactions. After a rigorous Proof of Concept (PoC) evaluating Apache ShardingSphere, TiDB, and Vitess, the team selected Vitess for its mature sharding capabilities and its ability to provide a stable, scalable environment on bare-metal infrastructure. This migration ensures the platform can handle large-scale traffic efficiently without the financial burden of proprietary license fees.

### Evaluation of Alternative Sharding Solutions

Before settling on Vitess, the team analyzed other prominent distributed database technologies to determine their fit for a high-stakes payment system:

* **Apache ShardingSphere:** While it offers flexible Proxy and JDBC layers, it was excluded because it requires significant manual effort for data resharding and rebalancing. The management overhead for implementing shard-key logic across various components (API, batch, admin) was deemed too high.
* **TiDB:** This MySQL-compatible distributed database uses a decoupled architecture consisting of TiDB (SQL layer), PD (metadata management), and TiKV (row-based storage). Its primary advantage is automatic rebalancing and the lack of a required shard key, which significantly reduces DBA operational costs.
* **Nbase-T:** The legacy system provided the highest performance efficiency per resource unit; however, the shift from a free to a paid licensing model necessitated the move to an open-source alternative.

### Vitess Architecture and Core Components

Vitess was chosen for its proven track record at companies like YouTube and GitHub, offering a robust abstraction layer that makes a clustered database appear as a single instance to the application. The system relies on several specialized components:

* **VTGate:** A proxy server that routes queries to the correct VTTablet, manages distributed transactions, and hides the physical topology of the database from the application.
* **VTTablet:** A sidecar process running alongside each MySQL instance that manages query execution, data replication, and connection pooling.
* **VTorc and Topology Server:** High availability is managed by VTorc (an automated failover tool), while metadata regarding shard locations and node status is synchronized via a topology server using ZooKeeper or etcd.

### PoC Performance and Environment Setup

The team conducted performance testing by simulating real payment API scenarios (a mix of reads and writes) on standardized hardware (8 vCPU, 16 GB RAM).

* **Comparison Metrics:** The tests focused on Transactions Per Second (TPS) and resource utilization as thread counts increased.
* **Infrastructure Strategy:** Because payment systems cannot tolerate even brief failover delays, the team opted for a bare-metal deployment rather than a containerized one to ensure maximum stability and performance.
* **Resource Efficiency:** While Nbase-T showed the best raw efficiency, Vitess demonstrated the necessary scalability and management features required to replace the legacy system effectively within the new cost constraints.

### Practical Recommendation

For organizations managing critical core systems that require horizontal scaling without proprietary lock-in, Vitess is a highly recommended solution. While it requires a deep understanding of its various components (like VTGate and VTTablet) and careful configuration of its topology server, the trade-off is a mature, cloud-native-ready architecture that supports massive scale and automated failover on both bare-metal and cloud environments.
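For readers unfamiliar with how Vitess expresses sharding, routing is driven by a per-keyspace VSchema that tells VTGate which column maps each row to a shard. A minimal sketch of such a config is shown below; the table and column names (`payments`, `user_id`) are hypothetical stand-ins, not taken from the LINE deployment:

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "payments": {
      "column_vindexes": [
        { "column": "user_id", "name": "hash" }
      ]
    }
  }
}
```

With a VSchema like this in place, the application issues ordinary MySQL queries against VTGate, which hashes `user_id` to pick the target shard, which is the abstraction that lets the cluster appear as a single database.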

discord

Gift Nitro and Earn A Flavorful Splash for your Avatar

Discord has launched a limited-time "Freshly Picked" promotion to encourage community gifting during the summer gaming season. Users who purchase a monthly or annual Nitro gift for a friend by June 23 will receive a permanent exclusive avatar decoration for their own profile. This initiative aims to enhance the platform experience for both the gifter and the recipient through premium feature access and seasonal cosmetic rewards.

### Earning the "Freshly Picked" Decoration

* To qualify for the reward, users must purchase either a monthly or annual Nitro subscription as a gift.
* The transaction must be processed specifically through the Discord desktop application to trigger the promotion.
* The promotional window is currently active and is scheduled to conclude on June 23.
* Once earned, the "Freshly Picked" avatar decoration, which features a summer-themed beverage animation, is added to the user's collection permanently.

### Gift Management and Recipient Benefits

* Gifts purchased during this period do not need to be redeemed immediately; they can be stored in the user's "Gift Inventory" for later distribution.
* Recipients of the Nitro gift gain full access to premium platform features, including the ability to use custom emojis across any server.
* The gifted membership also enables higher-quality streaming capabilities, improving the visual fidelity of shared gameplay sessions.

### Support and Documentation

* Discord has updated its Help Center with a dedicated section to address specific questions regarding the promotion's terms.
* The desktop app remains the primary interface for tracking gift status and managing the inventory of earned decorations.

To take advantage of this offer, ensure you complete your gift purchase through the desktop client before the June 23 deadline. This allows you to secure the permanent cosmetic reward while providing a friend with enhanced streaming and customization features for the summer.