discord

How to Use the Discord Soundboard & Add More Sounds

Discord’s Soundboard feature lets users play short audio clips instantly during voice calls to react to live events or social cues. By providing a collection of pre-set and customizable soundbites, the tool adds a layer of expressive, real-time engagement to the platform’s communication suite. This integrated system streamlines the use of audio reactions, removing the need for external software to trigger sound effects during conversations.

### Soundboard Functionality and Usage

* Trigger specific audio reactions like airhorns or audience cheers instantly during voice calls.
* Utilize contextual sounds, such as crickets, to fill silences or react to specific in-game moments.
* Access the feature through a dedicated interface within the call window for rapid selection.

### Customization and Audio Control

* Expand the library by uploading custom sound files to personalize the collection for specific servers or friend groups.
* Manage the playback experience using dedicated volume controls to ensure sounds are audible without being disruptive.
* Navigate settings to find where to add, remove, and organize sounds for easier access during high-energy moments.

To enhance your group interactions, explore adding custom sounds that reflect your community's inside jokes, but be sure to use the individual volume sliders to maintain a comfortable balance for all participants in the call.

discord

Checkpoint 3: Leveling Up Discord Quests with Orbs and Advanced Measurement

Discord is scaling its Quests advertising platform by introducing a new virtual currency and expanding its measurement capabilities through a strategic analytics partnership. These updates aim to deepen user engagement while providing brand partners with more granular data on the return on investment for their campaigns.

**Introduction of Discord Orbs**

* Discord Orbs serve as a new virtual reward that users can earn by participating in sponsored Quests.
* The currency is redeemable in the Discord Shop for a variety of digital goods, including Nitro credits and profile cosmetics.
* Some Shop items will be designated as Orbs exclusives to drive participation within the Quests ecosystem.
* The feature is currently rolling out to a select group of users to test integration before a broader release.

**Enhanced Advertising Analytics with Kantar**

* Discord has established a new partnership with Kantar to bolster the measurement framework of its advertising products.
* This collaboration provides advertisers with advanced analytics tools to better track campaign performance and ROI.
* The partnership is designed to validate the effectiveness of Quests as an advertising medium by offering third-party performance insights.

These updates represent a strategic shift for Discord, transforming Quests from a simple engagement tool into a robust advertising product that rewards user participation with tangible platform value. Brands looking to reach gaming audiences should monitor the rollout of Orbs as a potential benchmark for gamified digital advertising.

discord

Thank You for Ten Years

Discord is celebrating its tenth anniversary, marking a decade of evolution from a niche gaming communication tool into a global social platform with 200 million monthly active users. The milestone report highlights how the platform has shifted the social media paradigm away from algorithmic feeds toward intimate, "digital living room" environments. Ultimately, the data shows that integrated voice and video features are the primary drivers for long-form engagement, significantly increasing both session duration and user retention.

## Gaming Ecosystem and Engagement Metrics

* Discord’s reach has expanded to 200 million monthly active users, with over 90% of the user base having played a PC, console, or mobile game within the last 30 days.
* The platform supports a massive variety of content, with users engaging in more than 8,000 unique titles per month on PC alone.
* Total monthly gaming time on the platform exceeds 2 billion hours, highlighting its role as a central hub for the global gaming community.
* Technical integration of voice chat acts as a force multiplier for engagement; users stay in gaming sessions three times longer when connected via Discord voice.

## Social Dynamics and Multimedia Co-consumption

* Social influence drives discovery and play, as 28% of users launch a specific game within one hour of watching a friend stream it via the platform.
* The presence of a social circle dramatically impacts performance and endurance, with gameplay sessions lasting seven times longer when users play with friends.
* The platform has successfully transitioned into a general-purpose hangout space; after gaming ends, 66% of users remain to watch videos, 59% listen to music, and 49% watch movies or shows together.
* 92% of users utilize voice channels simultaneously while gaming, indicating that the platform functions as a secondary layer to the primary gaming experience.

## The Architecture of Small-Scale Socializing

* Discord has redefined digital interaction by prioritizing "micro-communities" over mass broadcasting, with 90% of all activity occurring in small, intimate servers.
* Communication remains focused and personal, evidenced by the fact that the average voice call consists of only four participants.
* Users are increasingly tribal but focused, typically rotating their time between three different friend-based servers per month.
* This structure replaces traditional social media "doomscrolling" with active participation, mimicking the feeling of physical presence through low-latency voice and video communication.

As Discord enters its second decade, its trajectory suggests that the future of social tech lies in facilitating high-quality, small-group interactions rather than massive public feeds. For developers and creators, the takeaway is clear: community stickiness is best achieved by building tools that allow users to seamlessly transition between active tasks, like gaming, and passive co-consumption of media.

discord

Go Beyond, Plus Ultra! with the My Hero Academia Collection

Discord has officially launched its first anime-themed collection in collaboration with Crunchyroll, featuring the popular series *My Hero Academia*. Released in anticipation of the 2025 Anime Awards, the collection introduces eleven new customization items that leverage character-specific "Quirks" and iconic gear. This partnership represents a direct response to high user demand for anime-centric profile aesthetics and immersive digital collectibles.

### Hero Gear and Avatar Decorations

* The creative team focused on "Hero Gear" as the primary design element to ensure decorations remain instantly recognizable while avoiding excessive obstruction of the user’s avatar.
* The production process followed a three-step workflow: conceptualizing the gear, applying color to establish mood and richness, and adding custom animations to bring the characters' unique Quirks to life.
* The collection features eight distinct decorations, including Izuku Midoriya, Katsuki Bakugo, Ochaco Uraraka, Shoto Todoroki, Endeavor, Hawks, All Might, and Tomura Shigaraki.

### Dynamic Profile Storytelling

* Designers utilized the larger surface area of profile effects to move beyond simple gear, focusing instead on "signature moves" and iconic moments from the anime.
* The effects are designed for immediate impact, aiming to tell a story in seconds through high-energy animations like Deku’s electrifying "Full Cowling" and Bakugo’s "Cluster" explosions.
* Three specific profile effects were created for this launch: Full Cowling, Cluster, and a dedicated League of Villains theme.

Fans can now access the *My Hero Academia* collection through the Shop on both desktop and mobile platforms to personalize their digital identity with these limited-edition hero and villain aesthetics.

line

Complex user authentication processes are easy

Designing a robust membership authentication system is a critical early-stage requirement that prevents long-term technical debt and protects a platform’s integrity. By analyzing the renewal of the Demaecan delivery service, it is evident that choosing the right authentication mechanism depends heavily on regional infrastructure and a balance between security costs and user friction. Ultimately, a well-structured authentication flow can simultaneously reduce fraud rates and significantly lower user drop-off during registration.

### The Consequences of Weak Authentication

Neglecting authentication design during the initial stages of a project often leads to "ghost members" and operational hurdles that are difficult to rectify later.

* **Data Integrity Issues:** Without verification, databases fill with unreachable or fake contact information, such as invalid phone numbers.
* **Onboarding Blockers:** Legitimate new users may be prevented from signing up if their recycled phone numbers are already linked to unverified legacy accounts.
* **Marketing Abuse:** A lack of unique identifiers makes it impossible to prevent bad actors from creating multiple accounts to exploit promotional coupons or events.

### Regional Differences in Verification

Authentication strategies must be tailored to the specific digital infrastructure of the target market, as "identity verification" varies globally.

* **Domestic (Korea) Standards:** Highly integrated systems allow for "Identity Verification," which combines possession (OTP) and real-name data through telecommunications companies or banking systems.
* **Global and Japanese Standards:** Most regions lack a centralized government-linked identity system, relying instead on "Possession Authentication" via email or SMS, or simple two-factor authentication (2FA).
* **Verification Expiration:** High-security services must define clear validity periods for authentication data and determine how long to retain data after a user withdraws to prevent immediate re-abuse.

### Strategic Fraud Prevention via IVR

When SMS-based possession authentication becomes insufficient to stop determined abusers, shifting the economic cost onto the fraudster is an effective solution.

* **SMS vs. Voice (IVR):** In Japan, acquiring phone numbers capable of receiving voice calls is more expensive than acquiring SMS-only numbers.
* **IVR Implementation:** By switching to call-based IVR (Interactive Voice Response) authentication, Demaecan raised the barrier to entry for abusers.
* **Impact:** This strategic shift in authentication type reduced the fraudulent user rate from over 20% to just 1.5%.

### Optimizing Sign-up UX and Retention

A complex authentication process does not have to result in high churn if the UI flow is logically organized and user-friendly.

* **Logical Grouping:** Grouping similar tasks, such as placing phone and email verification sequentially, helps users understand the progression of the sign-up flow.
* **Streamlined Data Entry:** Integrating social login buttons early in the process allows for email auto-fill, reducing the number of manual input fields for the user.
* **Safety Nets:** Implementing simple "back" buttons for correcting typos during email verification and adding warning dialogs when a user tries to close the window significantly reduces accidental exits.
* **Performance Metrics:** These UX improvements led to a 30% decrease in user attrition, proving that structured flows can mitigate the friction of multi-step verification.

To build a successful authentication system, planners should prioritize the most cost-effective verification method for their specific market and focus on grouping steps logically to maintain a smooth user experience. Monitoring conversion logs is essential to identify and fix specific points in the flow where users might struggle.
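The possession-authentication rules discussed above (a validity period, a single-use code, a cap on guesses) can be sketched as a minimal one-time-code verifier. This is an illustrative Python sketch, not Demaecan's implementation; the `OtpVerifier` class, the 5-minute TTL, and the attempt limit are assumptions chosen to show the expiry and re-abuse safeguards.

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed validity period: codes expire after 5 minutes
MAX_ATTEMPTS = 3        # assumed guess limit before a new code is required

class OtpVerifier:
    """Illustrative possession authentication via a one-time code."""

    def __init__(self):
        self._pending = {}  # phone_number -> [code, issued_at, attempts]

    def issue_code(self, phone_number: str) -> str:
        code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit one-time code
        self._pending[phone_number] = [code, time.time(), 0]
        return code  # in a real service this would be delivered via SMS or an IVR call

    def verify(self, phone_number: str, submitted: str) -> bool:
        entry = self._pending.get(phone_number)
        if entry is None:
            return False
        code, issued_at, attempts = entry
        if time.time() - issued_at > CODE_TTL_SECONDS or attempts >= MAX_ATTEMPTS:
            del self._pending[phone_number]  # expired or exhausted: force reissue
            return False
        entry[2] += 1
        if secrets.compare_digest(code, submitted):
            del self._pending[phone_number]  # single-use: consume on success
            return True
        return False
```

A real flow would add per-number rate limiting on `issue_code` and persist the state server-side, but the core invariant is the same: a code is only accepted once, within its validity window.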

line

Code Quality Improvement Techniques Part

The "Clone Family" anti-pattern occurs when two parallel inheritance hierarchies—such as a data model tree and a provider tree—share an implicit relationship that is not enforced by the type system. This structure often leads to type-safety issues and requires risky downcasting to access specific data types, increasing the likelihood of runtime errors during code modifications. To resolve this, developers should replace rigid inheritance with composition or utilize parametric polymorphism to explicitly link related types.

## The Risks of Implicit Correspondence

Maintaining two separate inheritance trees where individual subclasses are meant to correspond to one another creates several technical hurdles.

* **Downcasting Requirements:** Because a base provider typically returns a base data model type, developers must manually cast the result to a specific subclass (e.g., `as FooDataModel`), which bypasses compiler safety.
* **Lack of Type Enforcement:** The constraint that a specific provider always returns a specific model is purely implicit; the compiler cannot prevent a provider from returning the wrong model type.
* **Fragile Architecture:** As the system grows, ensuring that "Provider A" always maps to "Model A" becomes difficult to audit, leading to potential bugs when new developers join the project or when the hierarchy is extended.

## Substituting Inheritance with Composition

When the primary goal of inheritance is simply to share common logic, such as fetching raw data, using composition or aggregation is often a superior alternative.

* **Logic Extraction:** Shared functionality can be moved into a standalone class, such as an `OriginalDataProvider`, which is then held as a private property within specific provider classes.
* **Direct Type Returns:** By removing the shared parent class, each provider can explicitly return its specific data model type without needing a common interface.
* **Decoupling:** This approach eliminates the "Clone Family" entirely by removing the need for parallel trees, resulting in cleaner and more modular code.

## Leveraging Parametric Polymorphism

In scenarios where a common parent class is necessary—for example, to manage a collection of providers within a shared lifecycle—generics can be used to bridge the two hierarchies safely.

* **Generic Type Parameters:** By defining the parent as `ParentProvider<T>`, the base class can use a type parameter for its return values rather than a generic base model.
* **Subclass Specification:** Each implementation (e.g., `FooProvider : ParentProvider<FooDataModel>`) explicitly defines its return type, allowing the compiler to enforce the relationship.
* **Flexible Constraints:** Developers can still utilize type bounds, such as `ParentProvider<T : CommonDataModel>`, to ensure that the generics adhere to a specific interface while maintaining type safety for callers.

When designing data providers and models, avoid creating parallel structures that rely on implicit assumptions. Prioritize composition to simplify the architecture, or use generics if inheritance is required, ensuring that the relationships between classes remain explicit and verifiable by the compiler.
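The parametric-polymorphism fix can be rendered in Python with `typing.Generic` (the post's snippets are Kotlin-style); all class names below are illustrative, and a type checker such as mypy plays the role of the Kotlin compiler in enforcing the provider-to-model pairing.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

@dataclass
class CommonDataModel:
    raw: str

@dataclass
class FooDataModel(CommonDataModel):
    foo_value: int

# Type bound mirrors ParentProvider<T : CommonDataModel> from the post.
T = TypeVar("T", bound=CommonDataModel)

class ParentProvider(Generic[T]):
    """Common parent for providers; the type parameter links each
    provider to its model, replacing the implicit parallel hierarchy."""

    def provide(self) -> T:
        raise NotImplementedError

class FooProvider(ParentProvider[FooDataModel]):
    def provide(self) -> FooDataModel:
        # Returns the specific subclass directly: callers need no downcast,
        # and a type checker rejects a provider returning the wrong model.
        return FooDataModel(raw="raw foo", foo_value=42)
```

With this structure, `FooProvider().provide().foo_value` is statically known to exist, whereas the original design would have required an unchecked `as FooDataModel` cast.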

line

Implementing a RAG-based

To address the operational burden of handling repetitive user inquiries for the AWX automation platform, LY Corporation developed a support bot utilizing Retrieval-Augmented Generation (RAG). By combining internal documentation with historical Slack thread data, the system provides automated, context-aware answers that significantly reduce manual SRE intervention. This approach enhances service reliability by ensuring users receive immediate assistance while allowing engineers to focus on high-priority development tasks.

### Technical Infrastructure and Stack

* **Slack Integration**: The bot is built using the **Bolt for Python** framework to handle real-time interactions within the company’s communication channels.
* **LLM Orchestration**: **LangChain** is used to manage the RAG pipeline; the developers suggest transitioning to LangGraph for teams requiring more complex multi-agent workflows.
* **Embedding Model**: The **paraphrase-multilingual-mpnet-base-v2** (SBERT) model was selected to support multi-language inquiries from LY Corporation’s global workforce.
* **Vector Database**: **OpenSearch** serves as the vector store, chosen for its availability as an internal PaaS and its efficiency in handling high-dimensional data.
* **Large Language Model**: The system utilizes **OpenAI (ChatGPT) Enterprise**, which ensures business data privacy by preventing the model from training on internal inputs.

### Enhancing LLM Accuracy through RAG and Vector Search

* **Overcoming LLM Limits**: Traditional LLMs suffer from "hallucinations," lack of up-to-date info, and opaque sourcing; RAG fixes this by providing the model with specific, trusted context during the prompt phase.
* **Embedding and Vectorization**: Textual data from wikis and chats are converted into high-dimensional vectors, where semantically similar phrases (e.g., "Buy" and "Purchase") are stored in close proximity.
* **k-NN Retrieval**: When a user asks a question, the bot uses **k-Nearest Neighbors (k-NN)** algorithms to retrieve the top *k* most relevant snippets of information from the vector database.
* **Contextual Generation**: Rather than relying on its internal training data, the LLM generates a response based specifically on the retrieved snippets, leading to higher accuracy and domain-specific relevance.

### AWX Support Bot Workflow and Data Sources

* **Multi-Source Indexing**: The bot references two main data streams: the official internal AWX guide wiki and historical Slack inquiry threads where previous solutions were discussed.
* **Automated First Response**: The workflow begins when a user submits a query via a Slack workflow; the bot immediately processes the request and provides an initial AI-generated answer.
* **Human-in-the-Loop Validation**: After receiving an answer, users can click "Issue Resolved" to close the ticket or "Call AWX Admin" if the AI's response was insufficient.
* **Efficiency Gains**: This tiered approach filters out "RTFM" (Read The F***ing Manual) style questions, ensuring that human administrators only spend time on unique or complex technical issues.

Implementing a RAG-based support bot is a highly effective strategy for SRE teams looking to scale their internal support without increasing headcount. For the best results, organizations should focus on maintaining clean internal documentation and selecting embedding models that reflect the linguistic diversity of their specific workforce.
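The k-NN retrieval step can be illustrated with a minimal cosine-similarity search. This sketch uses tiny hand-made 3-dimensional vectors in place of the SBERT embeddings and a plain Python list in place of OpenSearch; the snippet texts and numbers are purely illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_snippets(query_vec, corpus, k=2):
    """corpus is a list of (snippet, vector) pairs; return the k snippets
    whose vectors are most similar to the query vector."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:k]]

# Toy index: in the real system these vectors come from the embedding
# model and the search is executed inside OpenSearch.
corpus = [
    ("How to restart an AWX job", [0.9, 0.1, 0.0]),
    ("Office lunch menu",         [0.0, 0.1, 0.9]),
    ("AWX job template settings", [0.8, 0.3, 0.1]),
]
query_vec = [1.0, 0.2, 0.0]  # stands in for an embedded user question
```

The retrieved snippets would then be pasted into the LLM prompt as context, which is the "Contextual Generation" step described above.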

google

Fine-tuning LLMs with user-level differential privacy

Researchers from Google investigated scaling user-level differential privacy (DP) to the fine-tuning of large language models in datacenter environments. While traditional example-level DP protects individual data points, user-level DP provides a stronger guarantee by masking the presence of an entire user's dataset, which is critical for privacy-sensitive, domain-specific tasks. The study explores how the flexibility of datacenter training can be used to optimize sampling strategies and contribution bounds to minimize the noise typically required for these stringent privacy guarantees.

## Limitations of Example-Level Privacy

* Standard differential privacy focuses on "example-level" protection, which prevents attackers from learning about specific individual data points.
* In many real-world scenarios, a single user contributes many examples to a dataset; if an attacker can analyze these multiple points together, they may still learn private information about the user even under example-level DP.
* User-level DP addresses this by ensuring a model remains essentially the same whether or not a specific user’s entire data collection was used during training.
* While more robust, user-level DP is "strictly harder" to implement because it requires injecting significantly more noise into the training process, a problem that scales with the size of the model.

## Methodologies for User-Level DP Fine-Tuning

* Both primary algorithms require a "contribution bound" during pre-processing, which strictly limits the number of examples any single user can provide to the training set.
* Example-Level Sampling (ELS) involves sampling random individual examples for a batch and then applying a modified version of DP-SGD with high noise to compensate for the potential presence of multiple examples from the same user.
* User-Level Sampling (ULS) involves sampling random users and including all of their (bounded) examples in a batch, which more closely resembles the structure of federated learning.
* The datacenter environment offers a unique advantage over federated learning because researchers can perform precise queries on both individual examples and whole users, allowing for better optimization of the noise-to-utility ratio.

## Optimization and Datacenter Flexibility

* The researchers focused on fine-tuning rather than full training because DP requires additional computation that is often unaffordable for base model training.
* A central challenge in this research is determining the optimal "contribution bound"—if the bound is too low, valuable data is discarded, but if it is too high, more noise must be added to maintain privacy.
* Because the datacenter allows for random sampling of any user at any time (unlike federated learning where devices must be online), the ULS algorithm can be tuned more effectively to achieve quality gains in the final model.

To maximize the utility of LLMs fine-tuned on private data, developers should prioritize User-Level Sampling (ULS) strategies and carefully calibrate the contribution bounds of their datasets. By leveraging the controlled environment of a datacenter to optimize these parameters, it is possible to achieve high-performance models that respect user privacy more effectively than traditional example-level methods.
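The shared pre-processing step (the contribution bound) and the ULS batch construction can be sketched in a few lines. This is an illustrative Python sketch, not the paper's code; function names, the random-sampling choices, and the seed handling are assumptions.

```python
import random

def bound_contributions(examples_by_user, bound, seed=0):
    """Pre-processing contribution bound: keep at most `bound` examples
    per user, sampled without replacement; the excess is discarded."""
    rng = random.Random(seed)
    bounded = {}
    for user, examples in examples_by_user.items():
        if len(examples) <= bound:
            bounded[user] = list(examples)
        else:
            bounded[user] = rng.sample(examples, bound)
    return bounded

def sample_uls_batch(bounded, users_per_batch, seed=0):
    """User-Level Sampling (ULS): sample whole users, then include all of
    each sampled user's (bounded) examples in the batch."""
    rng = random.Random(seed)
    users = rng.sample(sorted(bounded), users_per_batch)
    return [ex for user in users for ex in bounded[user]]
```

Note how the bound controls the trade-off described above: a small `bound` discards data from prolific users, while a large one forces more noise in the DP training step that would follow.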

line

Code Quality Improvement Techniques Part

The "Set Discount" technique improves code quality by grouping related mutable properties into a single state object rather than allowing them to be updated individually. By restricting state changes through a controlled interface, developers can prevent inconsistent configurations and simplify the lifecycle management of complex classes. This approach ensures that dependent values are updated atomically, significantly reducing bugs caused by race conditions or stale data.

### The Risks of Fragmented Mutability

When a class exposes multiple independent mutable properties, such as `isActive`, `minImportanceToRecord`, and `dataCountPerSampling`, it creates several maintenance challenges:

* **Order Dependency:** Developers might accidentally set `isActive` to true before updating the configuration properties, causing the system to briefly run with stale or incorrect settings.
* **Inconsistent Logic:** Internal state resets (like clearing a counter) may be tied to one property but forgotten when another related property changes, leading to unpredictable behavior.
* **Concurrency Issues:** Even in single-threaded environments, asynchronous updates to individual properties can create race conditions that are difficult to debug.

### Consolidating State with SamplingPolicy

To resolve these issues, the post recommends refactoring individual properties into a dedicated configuration class and using a single reference to manage the state:

* **Atomic Updates:** By wrapping configuration values into a `SamplingPolicy` class, the system ensures that the minimum importance level and sampling interval are always updated together.
* **Representing "Inactive" with Nulls:** Instead of a separate boolean flag, the `policy` property can be made nullable. An `inactive` state is naturally represented by `null`, making it impossible to "activate" the recorder without providing a valid policy.
* **Explicit Lifecycle Methods:** Replacing property setters with methods like `startRecording()` and `finishRecording()` forces a clear transition of state and ensures that counters are reset consistently every time a new session begins.

### Advantages of Restricting State Transitions

Moving from individual property mutation to a consolidated interface offers several technical benefits:

* **Guaranteed Consistency:** It eliminates the possibility of "half-configured" states because the policy is replaced as a whole.
* **Simplified Thread Safety:** If the class needs to be thread-safe, developers only need to synchronize a single reference update rather than coordinating multiple volatile variables.
* **Improved Readability:** The intent of the code becomes clearer to future maintainers because the valid combinations of state are explicitly defined by the API.

When designing components where properties are interdependent or must change simultaneously, you should avoid providing public setters for every field. Instead, provide a focused interface that limits updates to valid combinations, ensuring the object remains in a predictable state throughout its lifecycle.
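A Python rendering of the consolidated-state design (the post's example is Kotlin-style) could look like the sketch below. The names follow the post's `SamplingPolicy`, `startRecording()`, and `finishRecording()`; the `Recorder` class and its internals are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SamplingPolicy:
    """Immutable bundle of the related configuration values, so they can
    only ever be replaced together, never mutated independently."""
    min_importance_to_record: int
    data_count_per_sampling: int

class Recorder:
    def __init__(self):
        # None naturally represents "inactive": there is no way to be
        # active without a complete, valid policy.
        self._policy: Optional[SamplingPolicy] = None
        self._counter = 0

    def start_recording(self, policy: SamplingPolicy) -> None:
        # One atomic transition: the whole policy is swapped in and the
        # counter reset is tied to the lifecycle method, so it cannot be
        # forgotten when an individual setting changes.
        self._policy = policy
        self._counter = 0

    def finish_recording(self) -> None:
        self._policy = None

    @property
    def is_active(self) -> bool:
        return self._policy is not None
```

Because the only mutable reference is `_policy`, making this thread-safe later would mean guarding a single assignment rather than coordinating three independent fields.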

google

Google Research at Google I/O 2025

Google Research at I/O 2025 showcases the "research to reality" transition, highlighting how years of foundational breakthroughs are now being integrated into Gemini models and specialized products. By focusing on multimodal capabilities, pedagogy, and extreme model efficiency, Google aims to democratize access to advanced AI while ensuring it remains grounded and useful across global contexts.

## Specialized Healthcare Models: MedGemma and AMIE

* **MedGemma:** This new open model, based on Gemma 3, is optimized for multimodal medical tasks such as radiology image analysis and clinical data summarization. It is available in 4B and 27B sizes, performing similarly to much larger models on the MedQA benchmark while remaining small enough for efficient local fine-tuning.
* **AMIE (Articulate Medical Intelligence Explorer):** A research AI agent designed for diagnostic medical reasoning. Its latest multimodal version can now interpret and reason about visual medical information, such as skin lesions or medical imaging, to assist clinicians in diagnostic accuracy.

## Educational Optimization through LearnLM

* **Gemini 2.5 Pro Integration:** The LearnLM family of models, developed with educational experts, is now integrated into Gemini 2.5 Pro. This fine-tuning enhances STEM reasoning, multimodal understanding, and pedagogical feedback.
* **Interactive Learning Tools:** A new research-optimized quiz experience allows students to generate custom assessments from their own notes, providing specific feedback on right and wrong answers rather than just providing solutions.
* **Global Assessment Pilots:** Through partnerships like the one with Kayma, Google is testing the automatic assessment of short and long-form content in regions like Ghana to scale quality educational tools.

## Multilingual Expansion and On-Device Gemma Models

* **Gemma 3 and 3n:** Research breakthroughs have expanded Gemma 3’s support to over 140 languages. The introduction of **Gemma 3n** targets extreme efficiency, capable of running on devices with as little as 2GB of RAM while maintaining low latency and low energy consumption.
* **ECLeKTic Benchmark:** To assist the developer community, Google introduced this novel benchmark specifically for evaluating how well large language models transfer knowledge across different languages.

## Model Efficiency and Factuality in Search

* **Inference Techniques:** Google Research continues to set industry standards for model speed and accessibility through technical innovations like **speculative decoding** and **cascades**, which reduce the computational cost of generating high-quality responses.
* **Grounded Outputs:** Significant focus remains on factual consistency, ensuring that the AI models powering features like AI Overviews in Search provide reliable and grounded information to users.

As Google continues to shrink the gap between laboratory breakthroughs and consumer products, the emphasis remains on making high-performance AI accessible on low-cost hardware and across diverse linguistic landscapes. Developers and researchers can now leverage these specialized tools via platforms like HuggingFace and Vertex AI to build more targeted, efficient applications.

line

How to evaluate AI-generated images?

To optimize the Background Person Removal (BPR) feature in image editing services, the LY Corporation AMD team evaluated various generative AI inpainting models to determine which automated metrics best align with human judgment. While traditional research benchmarks often fail to reflect performance in high-resolution, real-world scenarios, this study identifies a framework for selecting models that produce the most natural results. The research highlights that as the complexity and size of the masked area increase, the gap between model performance becomes more pronounced, requiring more sophisticated evaluation strategies.

### Background Person Removal Workflow

* **Instance Segmentation:** The process begins by identifying individual pixels to classify objects such as people, buildings, or trees within the input image.
* **Salient Object Detection:** This step distinguishes the main subjects of the photo from background elements to ensure only unwanted figures are targeted for removal.
* **Inpainting Execution:** Once the background figures are removed, inpainting technology is used to reconstruct the empty space so it blends seamlessly with the surrounding environment.

### Comparison of Inpainting Technologies

* **Diffusion-based Models:** These models, such as FLUX.1-Fill-dev, restore damaged areas by gradually removing noise. While they excel at restoring complex details, they are generally slower than GANs and can occasionally generate artifacts.
* **GAN-based Models:** Using a generator-discriminator architecture, models like LaMa and HINT offer faster generation speeds and competitive performance for lower-resolution or smaller inpainting tasks.
* **Performance Discrepancy:** Experiments showed that while most models perform well on small areas, high-resolution images with large missing sections reveal significant quality differences that are not always captured in standard academic benchmarks.

### Evaluation Methodology and Metrics

* **BPR Evaluation Dataset:** The team curated a specific dataset of 10 images with high quality-variance to test 11 different inpainting models released between 2022 and 2024.
* **Single Image Quality Metrics:** Evaluated models using LAION Aesthetics score-v2, CLIP-IQA, and Q-Align to measure the aesthetic quality of individual generated frames.
* **Preference and Reward Models:** Utilized PickScore, ImageReward, and HPS v2 to determine which generated images would be most preferred by human users.
* **Objective:** The goal of these tests was to find an automated evaluation method that minimizes the need for expensive and time-consuming human reviews while maintaining high reliability.

Selecting an inpainting model based solely on paper-presented metrics is insufficient for production-level services. For features like BPR, it is critical to implement an evaluation pipeline that combines both aesthetic scoring and human preference models to ensure consistent quality across diverse, high-resolution user photos.
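One simple way to combine several automated metrics into a single model ranking, as such an evaluation pipeline requires, is min-max normalization followed by a weighted average. The sketch below is illustrative: the metric names match the post, but the scores, the model set, and the equal weighting are assumptions, not the team's measured values.

```python
def rank_models(scores_by_model, weights):
    """scores_by_model: {model: {metric: score}}, where higher is better
    for every metric. Min-max normalize each metric across models so that
    differently scaled metrics are comparable, then take a weighted average."""
    metrics = list(weights)
    lo = {m: min(s[m] for s in scores_by_model.values()) for m in metrics}
    hi = {m: max(s[m] for s in scores_by_model.values()) for m in metrics}

    def norm(metric, value):
        span = hi[metric] - lo[metric]
        return 0.5 if span == 0 else (value - lo[metric]) / span

    combined = {
        model: sum(weights[m] * norm(m, s[m]) for m in metrics)
        for model, s in scores_by_model.items()
    }
    return sorted(combined, key=combined.get, reverse=True)

# Illustrative scores only, not measurements from the study.
scores_by_model = {
    "FLUX.1-Fill-dev": {"CLIP-IQA": 0.80, "PickScore": 0.70},
    "LaMa":            {"CLIP-IQA": 0.60, "PickScore": 0.40},
    "HINT":            {"CLIP-IQA": 0.55, "PickScore": 0.50},
}
weights = {"CLIP-IQA": 0.5, "PickScore": 0.5}
```

In practice the weights would be tuned against a held-out set of human judgments, which is exactly the alignment question the study investigates.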

discord

Staff Picks, May 2025: The Games That Brought Us to Discord

To celebrate its anniversary, Discord is hosting a retrospective featuring team members Christina, Emi, Jeremy, and Armando. The post reflects on the platform's growth and its deep-rooted history in the gaming community by examining the specific titles that first brought these individuals to the service. Through these personal stories, the platform highlights its evolution from a new communication tool into a central hub for long-term gaming communities.

### Community Retrospectives and Origins

* The anniversary serves as a milestone to look back at the platform's evolution and the expanding "backlog" of games that have defined the user experience over the years.
* Staff members recount the specific gaming experiences and social needs that served as their primary motivation for joining Discord during its early years.
* The narrative emphasizes the platform's longevity and its role in facilitating social connections centered around shared digital hobbies.

### Current Gaming Trends and Recommendations

* Beyond looking back at the past, the contributors highlight their current gaming habits and the titles currently occupying their time.
* Specific mentions include the mystery-puzzle game *Blue Prince*, illustrating the diverse range of genres supported by the Discord community.
* The post provides readers with new game recommendations to help celebrate the anniversary, bridging the gap between nostalgic origins and modern playstyles.

As Discord marks another year, the focus remains on the intersection of communication and play. Users looking to participate in the celebration can do so by engaging with the team's curated recommendations or reflecting on the specific titles that first integrated them into the Discord ecosystem.

discord

Introducing the Discord for Business Newsletter, Vol. 1

Discord has introduced a dedicated newsletter designed to keep partners and business associates informed about the platform's latest developments. The initiative serves as a strategic resource for teams to identify emerging business opportunities and maintain a close connection with Discord’s evolving ecosystem.

**Newsletter Objectives and Business Value**

* Provides a specialized communication channel tailored specifically for Discord’s professional partner network and friends.
* Aggregates the latest technical updates and platform changes to help external teams stay ahead of industry shifts.
* Focuses on highlighting specific opportunities that can help businesses grow and scale within the Discord environment.

**Direct Engagement and Subscription**

* Offers a direct-to-inbox delivery method to ensure stakeholders receive updates without needing to monitor external feeds.
* Encourages immediate sign-up for teams wanting to maintain a competitive edge through consistent information flow.

Stakeholders and developers should subscribe to this newsletter to ensure they remain aligned with Discord’s product roadmap and can pivot quickly based on new partnership opportunities.