Series Introduction

Staying ahead of the curve in JavaScript development requires embracing the ever-evolving landscape of tools and technologies. As we navigate through 2024, the landscape of JavaScript development tools will continue to transform, offering more refined, efficient, and user-friendly options. This "JS Toolbox 2024" series is your one-stop shop for a comprehensive overview of the latest and most impactful tools in the JavaScript ecosystem. Across the series, we'll delve into various categories of tools, including runtime environments, package managers, frameworks, static site generators, bundlers, and test frameworks. The series will empower you to wield these tools effectively by providing a deep dive into their functionalities, strengths, weaknesses, and how they fit into the modern JavaScript development process. Whether you're a seasoned developer or just starting, this series will equip you with the knowledge needed to select the right tools for your projects in 2024.

The series consists of three parts:

1. Runtime Environments and Package Management (this article): In this first installment, we explore the intricacies of runtime environments, focusing on Node and Bun. You'll gain insights into their histories, performance metrics, community support, and ease of use, supported by relevant case studies. The segment on package management tools compares npm, Yarn, and pnpm, highlighting their performance and security features. We provide tips for choosing the most suitable package manager for your project.
2. Frameworks and Static Site Generators: This post provides a thorough comparison of popular frameworks like React, Vue, Angular, Svelte, and HTMX, focusing on their unique features and suitability for different project types. The exploration of static site generators covers Astro, Nuxt/Next, Hugo, Gatsby, and Jekyll, offering detailed insights into their usability, performance, and community support, along with success stories from real-world applications.

3. Bundlers and Test Frameworks: We delve into the world of bundlers, comparing webpack, esbuild, Vite, and Parcel 2. This section aims to guide developers through the nuances of each bundler, focusing on their performance, compatibility, and ease of use. The test frameworks section provides an in-depth look at MochaJS, Jest, Jasmine, Puppeteer, Selenium, and Playwright. It includes a comparative analysis emphasizing ease of use, community support, and overall robustness, supplemented with case studies demonstrating their effectiveness in real-world scenarios.

Part 1: Runtime Environments and Package Management

JavaScript is bigger than ever, and the ecosystem is nothing short of overwhelming. In this JS Toolbox 2024 series, we've selected and analyzed the most noteworthy JS tools, so that you don't have to. Just as any durable structure needs a solid foundation, successful JavaScript projects rely heavily on starting with the right tools. This post, the first in our JS Toolbox 2024 series, explores the core pillars of the JavaScript and TypeScript ecosystem: runtime environments, package management, and development servers.

In this post:

1. Runtime environments: Node.js, Deno, Bun
2. Comparing JS runtimes: installation; performance, stability, and security; community
3. Package managers: npm, Yarn, pnpm, Bun
4. What to choose

Runtime Environments

In JavaScript development, runtimes are the engines that drive advanced, server-centric projects beyond the limitations of a user's browser.
This independence is pivotal in modern web development, allowing for more sophisticated and versatile applications. The JavaScript runtime market is more dynamic than ever, with several contenders competing for the top spot. Node.js, the long-established leader in this space, now faces formidable competition from Deno and Bun.

Deno is the brainchild of Ryan Dahl, the original creator of Node.js. It represents a significant step forward in runtime technology, emphasizing security through fine-grained access controls and modern capabilities like native TypeScript support.

Bun has burst onto the scene, releasing version 1.0 in September 2023. Bun sets itself apart with exceptional speed, challenging the performance standards established by its predecessors. Bun's rapid execution capabilities, enabled by just-in-time (JIT) execution, make it a powerful alternative in the runtime environment space.

An Overview of Runtime Popularity Trends

The popularity of Node.js has continued to grow over 2023, and I anticipate this will continue into 2024. There has been a slight downtrend in the growth trajectory, which I'd guess is due to the other tooling growing in market share. Deno has seen substantial growth over 2023. If the current trend continues, I anticipate Deno overtaking Node.js in popularity in 2024, though it's worth mentioning that star-based popularity doesn't reflect usage in the field. Without a doubt, Node.js will retain its position as the lead environment for production systems throughout 2024. Bun has seen the largest growth in this category over the past year. I anticipate that Bun will find a steady foothold and continue its ascent, following the release of version 1.0. It's early days for this new player, but comparing early-stage growth to others in the category, it's shaping up to be a high performer.
Node.js

Node.js, acclaimed as the leading web technology by StackOverflow developers, has been a significant player in the web development world since its inception in 2009. It revolutionized web development by enabling JavaScript for server-side scripting, thus allowing for the creation of complex, backend-driven applications.

Advantages

- Asynchronous and event-driven: Node.js operates on an asynchronous, event-driven architecture, making it efficient for scalable network applications. This model allows Node.js to handle multiple operations concurrently without blocking the main thread.
- Rich ecosystem: With a diverse and extensive range of tools, resources, and libraries available, Node.js offers developers an incredibly rich ecosystem, supporting a wide array of development needs.
- Optimized for performance: Node.js is known for its low-latency handling of HTTP requests, which is optimal for web frameworks. It efficiently utilizes system resources, allowing for load balancing and the use of multiple cores through child processes and its cluster module.

Disadvantages

- Learning curve for asynchronous programming: The non-blocking, asynchronous nature of Node.js can be challenging for developers accustomed to linear programming paradigms, leading to a steep learning curve.
- Callback hell: While manageable, Node.js can lead to complex nested callbacks, often referred to as "callback hell," which can make code difficult to read and maintain. However, this can be mitigated with modern features like async/await.

Deno

Deno represents a step forward in JavaScript and TypeScript runtimes, leveraging Google's V8 engine and built in Rust for enhanced security and performance. Conceived by Ryan Dahl, the original creator of Node.js, Deno is positioned as a more secure and modern alternative, addressing some of the core issues found in Node.js, particularly around security.
Advantages

- Enhanced security: Deno's secure-by-default approach requires explicit permissions for file, network, and environment access, reducing the risks associated with an all-access runtime.
- Native TypeScript support: It offers first-class support for TypeScript and TSX, allowing developers to use TypeScript out of the box without additional transpiling steps.
- Single executable compilation: Deno can compile entire applications into a single, self-contained executable, simplifying deployment and distribution processes.

Disadvantages

- Young ecosystem: Being relatively new compared to Node.js, Deno's ecosystem is still growing, which may temporarily limit the availability of third-party modules and tools.
- Adoption barrier: For teams and projects deeply integrated with Node.js, transitioning to Deno can represent a significant change, posing challenges in terms of adoption and migration.

Bun

Bun emerges as a promising new contender in the JavaScript runtime space, positioning itself as a faster and more efficient alternative to Node.js. Developed using Zig and powered by JavaScriptCore, Bun is designed to deliver significantly quicker startup times and lower memory usage, making it an attractive option for modern web development. Currently, Bun provides a limited, experimental native build for Windows, with full support for Linux and macOS. Hopefully, we'll see full Windows support released early in 2024.

Advantages

- High performance: Bun's main draw is its performance, offering faster execution and lower resource usage compared to traditional runtimes, making it particularly suitable for high-efficiency requirements.
- Integrated development tools: It comes with an integrated suite of tools, including a test runner, script runner, and a Node.js-compatible package manager, all optimized for speed and compatibility with Node.js projects.
- Evolving ecosystem: Bun is continuously evolving, with a focus on enhancing Node.js compatibility and broadening its integration with various frameworks, signaling its potential as a versatile and adaptable solution for diverse development needs.

Disadvantages

- Relative newness in the market: As a newer player, Bun's ecosystem is not as mature as Node.js, which might pose limitations in terms of available libraries and community support.
- Compatibility challenges: While efforts are being made to improve compatibility with Node.js, there may still be challenges and growing pains in integrating Bun into existing Node.js-based projects or workflows.

Comparing JavaScript Runtimes

Installation

Each JavaScript runtime has its unique installation process. Here's a brief overview of how to install Node.js, Deno, and Bun:

Node.js

1. Download: Visit the Node.js website and download the installer suitable for your operating system.
2. Run installer: Execute the downloaded file and follow the installation prompts. This process will install both Node.js and npm.
3. Verify installation: Open a terminal or command prompt and type node -v and npm -v to check the installed versions of Node.js and npm, respectively.

Managing different versions of Node.js has historically been a challenge for developers. To address this issue, tools like NVM (Node Version Manager) and NVM Windows have been developed, greatly simplifying the process of installing and switching between various Node.js versions.

Deno

1. Shell command: You can install Deno using a simple shell command.
   - Windows: irm https://deno.land/install.ps1 | iex
   - Linux/macOS: curl -fsSL https://deno.land/x/install/install.sh | sh
2. Alternative methods: Other methods, like downloading a binary from the Deno releases page, are also available.
3. Verify installation: To ensure Deno is installed correctly, type deno --version in your terminal.

Bun

1. Shell command: Similar to Deno, Bun can be installed using a shell command.
For instance, on macOS, Linux, and WSL, use the command curl https://bun.sh/install | bash.
2. Alternative methods: For detailed instructions or alternative methods, check the Bun installation guide.
3. Verify installation: After installation, run bun --version in your terminal to verify that Bun is correctly installed.

Performance, Stability, and Security

In evaluating JavaScript runtimes, performance, stability, and security are the key factors to consider. Mayank Choubey's benchmark studies provide insightful comparisons among Node.js, Deno, and Bun:

- Node.js vs Deno vs Bun: Express hello world server benchmarking
- Node.js vs Deno vs Bun: Native HTTP hello world server benchmarking

I'd recommend giving the posts a read if you're interested in the specifics. Otherwise, I'll do my best to summarize the results below.

Node.js

Historically, Node.js has been known for its efficient handling of asynchronous operations and has set a standard in server-side JavaScript performance. In the benchmark, Node.js displayed solid performance, reflective of its maturity and optimization over the years. However, it didn't lead the pack in terms of raw speed. As Node.js has been around for a long time and has proven its reliability, it wins the category of stability.

Deno

Deno, being a relatively newer runtime, has shown promising improvements in performance, particularly in the context of security and TypeScript support. The benchmark results for Deno were competitive, showcasing its capability to handle server requests efficiently, though it still trails slightly behind Bun in raw processing speed. Given its emphasis on security features like explicit permissions for file, network, and environment access, Deno excels in the category of security.

Bun

Bun made a significant impression with its performance in this benchmark. It leverages Zig and JavaScriptCore, which contributes to its faster startup times and lower memory usage.
In the "Hello World" server test, Bun outperformed both Node.js and Deno in terms of request handling speed, showcasing its potential as a high-performance JavaScript runtime. With its significant speed improvements, Bun leads in the category of performance.

These results suggest that while Node.js remains a reliable and robust choice for many applications, Deno and Bun are catching up, offering competitive and sometimes superior performance metrics. Bun, in particular, demonstrates remarkable speed, which could be a game-changer for performance-critical applications. However, it's important to consider other factors such as stability, community support, and feature completeness when choosing a runtime for your project.

Community

The community surrounding a JavaScript runtime is vital for its growth and evolution. It shapes development, provides support, and drives innovation. Let's briefly examine the community dynamics for Node.js, Deno, and Bun:

- Node.js: Node.js has one of the largest, most diverse communities in software development, enriched by a wide array of libraries, tools, and resources. Its community actively contributes to its core and modules, bolstered by global events and forums for learning and networking.
- Deno: Deno's community is rapidly growing, drawing developers with its modern and security-centric features. It's characterized by active involvement in the runtime's development and a strong online presence, particularly on platforms like GitHub and Discord.
- Bun: Although newer, Bun's community is dynamic and quickly expanding. Early adopters are actively engaged in its development and performance enhancement, with lively discussions and feedback exchanges on online platforms.

Each of these communities, from Node.js's well-established network to the emerging groups around Deno and Bun, plays a crucial role in the adoption and development of these runtimes.
For developers, understanding the nuances of these communities can be key to leveraging the full potential of a chosen runtime.

Package Managers

If you've ever worked on the front end of a modern web application, or if you're a full-stack Node engineer, you've likely used a package manager at some point. The package manager is responsible for managing the dependencies of your project, such as libraries, frameworks, and utilities. NPM is the default package manager that comes pre-installed with Node.js. Yarn and pnpm compete to take NPM's spot as the package management tool of choice for developers working in the JavaScript ecosystem.

NPM

Node Package Manager, or NPM for short, is the default and most dominant package manager for JavaScript projects. It comes pre-installed with Node.js, providing developers with immediate access to the npm registry, allowing them to install, share, and manage package dependencies right from the start of their project. It was created in 2009 by Isaac Schlueter as a way to share and reuse code for Node.js projects. Since then, it has grown to become a huge repository of packages that can be used for both front-end and back-end development.

NPM consists of two main components:

- NPM CLI (command line interface): This tool is used by developers to install, update, and manage packages (libraries or modules) in their JavaScript projects. It interacts with npm's online repository, allowing developers to add external packages to their projects easily.
- NPM registry: An extensive online database of public and private JavaScript packages, the npm registry is where developers can publish their packages, making them accessible to the wider JavaScript community. It's known for its vast collection of libraries, frameworks, and tools, contributing to the versatility and functionality of JavaScript projects.
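The CLI-and-registry workflow described above boils down to a handful of commands. A minimal local session looks like this (the directory name is illustrative, and the commented commands require network access to the registry):

```shell
# Create and enter a scratch project directory
mkdir -p npm-demo && cd npm-demo

# Initialize a new project; -y accepts all defaults and writes package.json
npm init -y

# The generated manifest is where dependencies get recorded
cat package.json

# Typical day-to-day commands against the registry (network required):
#   npm install express          # add a runtime dependency
#   npm install --save-dev jest  # add a development-only dependency
#   npm publish                  # publish your own package
```

Every subsequent `npm install <package>` adds an entry to this package.json and downloads the package into node_modules.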
This star graph doesn't capture much in terms of the overall popularity of the NPM CLI, given that the tool comes pre-installed with Node.js. Knowing this, it's worth also reviewing the overall download count of these packages. NPM currently has 56,205,118,637 weekly downloads. Woah, 56.2B! It's safe to say NPM isn't going anywhere. From the graphs, we can see a steady incline in the overall popularity of this tool through 2023. I predict this growth will continue through 2024.

Yarn

Yarn is a well-established open-source package manager created in 2016 by Facebook, Google, Exponent, and Tilde. It was designed to address some of the issues and limitations of NPM, such as speed, correctness, security, and developer experience. To improve these areas, Yarn incorporates a range of innovative features. These include workspaces for managing multiple packages within a single repository, offline caching for faster installs, parallel installations for improved speed, a hardened mode for enhanced security, and interactive commands for a more intuitive user interface. These features collectively contribute to Yarn's robustness and efficiency.

Yarn features a command-line interface that closely resembles NPM's, but with several enhancements and differences. It utilizes the same package.json file as NPM for defining project dependencies. Additionally, Yarn introduces the yarn.lock file, which precisely locks down the versions of dependencies, ensuring consistent installs across environments. Like NPM, Yarn also creates a node_modules folder where it installs and organizes the packages for your project.

Yarn currently has 4,396,069 weekly downloads. Given that Yarn and pnpm require manual installs, their download counts aren't directly comparable with NPM's, but they still give us a glance at the overall trends. In 2023, Yarn appears to have lost some of its growth trajectory, but it still remains the most popular alternative to NPM for package management.
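The workspaces feature mentioned above is configured directly in package.json. A minimal monorepo root manifest might look like this (names are illustrative); "private": true is required because the root package exists only to tie the workspaces together and should never be published:

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

Running yarn install at the root then resolves, installs, and links every package under packages/ in a single pass, recording the whole tree in one yarn.lock.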
pnpm

Performant NPM, or pnpm for short, is another alternative package manager for JavaScript that was created in 2016 by Zoltan Kochan. It was designed to be faster, lighter, and more secure than both NPM and Yarn. It excels in saving disk space and speeding up the installation process. Unlike npm, where each project stores separate copies of dependencies, pnpm stores them in a content-addressable store. This approach means that if multiple projects use the same dependency, they share a single stored copy, significantly reducing disk usage. When updating dependencies, pnpm only adds changed files instead of duplicating the entire package.

The installation process in pnpm is streamlined into three stages: resolving dependencies, calculating the directory structure, and linking dependencies, making it faster than traditional methods. pnpm also creates a unique node_modules directory using symlinks for direct dependencies only, avoiding unnecessary access to indirect dependencies. This approach ensures a cleaner dependency structure, while still offering a traditional flat structure option through its node-linker setting for those who prefer it.

pnpm currently has 8,016,757 weekly downloads. pnpm's popularity surged in 2023, and I foresee this upward trend extending into 2024, as an increasing number of developers recognize its resource efficiency and streamlined project setup.

Bun

As Bun comes with an npm-compatible package manager, I felt it was worth mentioning here. I've covered Bun in the "Runtime Environments" section above.

What To Choose

Choosing the right tool for your project in 2024 depends on a variety of factors, including your project's specific requirements, your team's familiarity with the technology, and the particular strengths of each tool. In the dynamic world of JavaScript development, having a clear understanding of these factors is crucial for making an informed decision.
For those prioritizing stability and a proven track record, Node.js remains a top recommendation. It's well-established, supported by a vast ecosystem, and continues to be a reliable choice for a wide range of applications. Node.js's maturity makes it a safe bet, especially for projects where long-term viability and extensive community support are essential.

On the other hand, if you're inclined towards experimenting with the latest advancements in the field and are operating in a Linux-based environment, Bun presents an exciting opportunity. It stands out for its impressive performance and is ideal for those looking to leverage the bleeding edge of JavaScript runtime technology. Bun's rapid execution capabilities make it a compelling option for performance-driven projects.

When it comes to package management, pnpm is an excellent choice. Its efficient handling of dependencies and disk space makes it ideal for developers managing multiple projects or large dependency trees. With its growing popularity and focus on performance, pnpm is well-suited for modern JavaScript development.

JavaScript tools in 2024 offer a massive range of options catering to different needs and preferences. Whether you opt for the stability of Node.js, the cutting-edge performance of Bun, or the efficient dependency management of pnpm, each tool brings unique strengths to the table. Carefully consider your project's requirements and team's expertise to make the best choice for your development journey in 2024. Like you, I'm always curious and looking to learn. If I've overlooked a noteworthy tool or if you have any feedback to share, reach out on LinkedIn.
Building a REST API to communicate with an RDS database is a fundamental task for many developers, enabling applications to interact with a database over the internet. This article guides you through the process of creating a RESTful API that talks to an Amazon Relational Database Service (RDS) instance, complete with examples. We'll use a popular framework and programming language for this demonstration: Node.js and Express, given their widespread use and support for building web services.

Prerequisites

Before we begin, ensure you have the following:

- An AWS account and an RDS instance set up: For this example, let's assume we're using a MySQL database, but the approach is similar for other database engines supported by RDS.
- Node.js and npm (Node Package Manager) installed on your development machine
- Basic knowledge of JavaScript and SQL

Step 1: Setting Up Your Project

First, create a new directory for your project and initialize a new Node.js application:

```shell
mkdir my-api
cd my-api
npm init -y
```

Install Express and the MySQL database connector:

```shell
npm install express mysql
```

Step 2: Creating the Database Connection

Create a new file named database.js in your project directory. This file will set up the connection to your RDS database. Replace the placeholders with your actual RDS instance details:

```javascript
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10,
  host: '<RDS_HOST>',
  user: '<RDS_USERNAME>',
  password: '<RDS_PASSWORD>',
  database: '<RDS_DATABASE>'
});

module.exports = pool;
```

Using a connection pool is recommended for managing multiple concurrent database connections efficiently.

Step 3: Building the REST API

Create a new file named app.js. This file will define your API endpoints and how they interact with the RDS database.
```javascript
const express = require('express');
const pool = require('./database');

const app = express();
const PORT = process.env.PORT || 3000;

app.use(express.json());

// Endpoint to get all items
app.get('/items', (req, res) => {
  pool.query('SELECT * FROM items', (error, results) => {
    if (error) return res.status(500).json({ error: error.message });
    res.status(200).json(results);
  });
});

// Endpoint to add a new item
app.post('/items', (req, res) => {
  const { name, description } = req.body;
  pool.query(
    'INSERT INTO items (name, description) VALUES (?, ?)',
    [name, description],
    (error, results) => {
      if (error) return res.status(500).json({ error: error.message });
      res.status(201).send(`Item added with ID: ${results.insertId}`);
    }
  );
});

// Start the server
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```

In this example, we've created two endpoints: one to retrieve all items from the items table and another to add a new item to the table. Query errors are returned to the client as a 500 response rather than thrown, which would crash the server. Ensure you have an items table in your RDS database with at least name and description columns.

Step 4: Running Your API

To start your API, run the following command in your project directory:

```shell
node app.js
```

Your API is now running and can interact with your RDS database. You can test the endpoints using tools like Postman or cURL.

Testing the API

To test retrieving items from the database, use:

```shell
curl http://localhost:3000/items
```

To test adding a new item:

```shell
curl -X POST http://localhost:3000/items -H "Content-Type: application/json" -d '{"name": "NewItem", "description": "This is a new item."}'
```

Conclusion

You've now set up a basic REST API that communicates with an AWS RDS database. This setup is scalable and can be expanded with more complex queries, additional endpoints, and more sophisticated database operations. Remember to secure your API and database connection, especially when deploying your application to production. With these foundations, you're well on your way to integrating AWS RDS databases into your web applications effectively.
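As a small extension of the security note above: rather than hard-coding credentials in database.js, you can read them from the environment. A minimal sketch, where the variable names (DB_HOST, DB_USER, and so on) are my own convention rather than anything AWS-specific:

```javascript
// Build MySQL connection settings from environment variables, with safe
// local defaults. In production, set DB_HOST, DB_USER, DB_PASSWORD, and
// DB_NAME before starting the process (shell, .env loader, or platform).
function dbConfigFromEnv(env = process.env) {
  return {
    connectionLimit: 10,
    host: env.DB_HOST || 'localhost',
    user: env.DB_USER || 'root',
    password: env.DB_PASSWORD || '',
    database: env.DB_NAME || 'my_api'
  };
}

module.exports = dbConfigFromEnv;
```

In database.js you would then call mysql.createPool(dbConfigFromEnv()) instead of passing literals, keeping secrets out of source control.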
In modern application development, delivering personalized and controlled user experiences is paramount. This necessitates the ability to toggle features dynamically, enabling developers to adapt their applications in response to changing user needs and preferences. Feature flags, also known as feature toggles, have emerged as a critical tool in achieving this flexibility. These flags empower developers to activate or deactivate specific functionalities based on various criteria such as user access, geographic location, or user behavior.

React, a popular JavaScript framework known for its component-based architecture, is widely adopted in building user interfaces. Given its modular nature, React applications are particularly well-suited for integrating feature flags seamlessly. In this guide, we'll explore how to integrate feature flags into your React applications using IBM App Configuration, a robust platform designed to manage application features and configurations. By leveraging feature flags and IBM App Configuration, developers can unlock enhanced flexibility and control in their development process, ultimately delivering tailored user experiences with ease. IBM App Configuration can be integrated with any framework, be it React, Angular, Java, Go, etc.

React's component-based architecture allows developers to build reusable and modular UI components. This makes it easier to manage complex user interfaces by breaking them down into smaller, self-contained units, and adding feature flags to React components makes those components easier to control.

Integrating With IBM App Configuration

IBM App Configuration provides a comprehensive platform for managing feature flags, environments, collections, segments, and more. Before delving into the tutorial, it's important to understand why integrating your React application with IBM App Configuration is necessary and what benefits it offers.
By integrating with IBM App Configuration, developers gain the ability to dynamically toggle features on and off within their applications. This capability is crucial for modern application development, as it allows developers to deliver controlled and personalized user experiences. With feature flags, developers can activate or deactivate specific functionalities based on factors such as user access, geographic location, or user preferences. This not only enhances user experiences but also provides developers with greater flexibility and control over feature deployments. Additionally, IBM App Configuration offers segments for targeted rollouts, enabling developers to gradually release features to specific groups of users. Overall, integrating with IBM App Configuration empowers developers to adapt their applications' behavior in real time, improving agility and enhancing user satisfaction.

To begin integrating your React application with App Configuration, follow these steps:

1. Create an Instance

Start by creating an instance of IBM App Configuration on cloud.ibm.com. Within the instance, create an environment, such as Dev, to manage your configurations. Then create a collection. Collections come in handy when there are multiple feature flags created for various projects: each project can have a collection in the same App Configuration instance, and you can tag feature flags to the collection they belong to.

2. Generate Credentials

Access the service credentials section and generate new credentials. These credentials will be required to authenticate your React application with App Configuration.

3. Install SDK

In your React application, install the IBM App Configuration React SDK using npm:

```shell
npm i ibm-appconfiguration-react-client-sdk
```

4. Configure Provider

In your index.js or App.js, wrap your application component with AppConfigProvider to enable AppConfig within your React app.
The Provider must be wrapped at the main level of the application to ensure the entire application has access. The AppConfigProvider requires various parameters, all of which can be found in the credentials you created.

5. Access Feature Flags

Now, within your App Configuration instance, create feature flags to control specific functionalities. Copy the feature flag ID for further integration into your code.

Integrating Feature Flags Into React Components

Once you've set up AppConfig in your React application, you can seamlessly integrate feature flags into your components.

Enable Components Dynamically

Use the feature flag ID copied from the App Configuration instance to toggle specific components based on the flag's status. This allows you to enable or disable features dynamically without redeploying your application.

Utilizing Segments for Targeted Rollouts

IBM App Configuration offers segments to target specific groups of users, enabling personalized experiences and controlled rollouts. Here's how to leverage segments effectively:

Define Segments

Create segments based on user properties, behaviors, or other criteria to target specific user groups.

Rollout Percentage

Adjust the rollout percentage to control the percentage of users who receive the feature within a targeted segment. This enables gradual rollouts or A/B testing scenarios. For example:

- If the rollout percentage is set to 100% and a particular segment is targeted, the feature is rolled out to all the users in that segment.
- If the rollout percentage is set between 1% and 99% (say, 60%) and a particular segment is targeted, the feature is rolled out to a random 60% of the users in that segment.
- If the rollout percentage is set to 0% and a particular segment is targeted, the feature is rolled out to none of the users in that segment.
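The rollout rules above can be made concrete with a small sketch. This is not the SDK's implementation, just an illustration of how percentage-based bucketing is commonly done: hash the user ID to a stable number between 0 and 99, then compare it to the rollout percentage, so each user gets a consistent decision across sessions.

```javascript
// Deterministically bucket a user into 0..99 using a simple string hash.
// (Illustrative only; the real SDK's hashing scheme may differ.)
function bucketFor(userId) {
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.codePointAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % 100;
}

// Decide whether a user in a targeted segment receives the feature.
function isFeatureEnabled(userId, inTargetedSegment, rolloutPercentage) {
  if (!inTargetedSegment) return false;        // segment not targeted
  if (rolloutPercentage === 0) return false;   // 0%: nobody in the segment
  if (rolloutPercentage === 100) return true;  // 100%: everyone in the segment
  return bucketFor(userId) < rolloutPercentage; // 1-99%: stable partial rollout
}
```

Because the bucket is derived from the user ID, a given user sees the same decision on every visit, which is what makes gradual rollouts and A/B tests reliable.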
Conclusion Integrating feature flags with IBM App Configuration empowers React developers to implement dynamic feature toggling and targeted rollouts seamlessly. By leveraging feature flags and segments, developers can deliver personalized user experiences while maintaining control over feature deployments. Start integrating feature flags into your React applications today to unlock enhanced flexibility and control in your development process.
NodeJS is a leading software development technology with a wide range of frameworks. These frameworks come with features, templates, and libraries that help developers overcome setbacks and build applications faster with fewer resources. This article takes an in-depth look at NodeJS frameworks in 2024. Read on to discover what they are, their features, and their applications. What Is NodeJS? NodeJS is an open-source server environment that runs on various platforms, including Windows, Linux, Unix, macOS, and more. It is free, runs JavaScript, and is built on Chrome's V8 JavaScript engine. Here's how NodeJS is described on its official website: “NodeJS is a platform built on Chrome's JavaScript runtime for easily building fast and scalable network applications. As an asynchronous event-driven JavaScript runtime, NodeJS is designed to build scalable network applications… Users of NodeJS are free from worries of dead-locking the process since there are no locks. Almost no function in NodeJS directly performs I/O, so the process never blocks except when the I/O is performed using synchronous methods of the NodeJS standard library. Because nothing blocks, scalable systems are very reasonable to develop in NodeJS.” Ryan Dahl developed this cross-platform runtime for building server-side and networking programs. NodeJS makes development easy and fast by offering a wide collection of JS modules, enabling developers to create web applications with higher accuracy and less stress. General Features of NodeJS NodeJS has some distinctive characteristics: Single-Threaded NodeJS uses a single-threaded, event-loop-based model that is nevertheless highly scalable. One of the biggest draws of this setup is its ability to process multiple requests concurrently: with event looping, NodeJS can perform non-blocking input/output operations. Highly Scalable Applications developed with NodeJS are highly scalable because the platform operates asynchronously.
It works on a single thread, which enables the system to handle multiple requests simultaneously. Once each response is ready, it is forwarded back to the client. No Buffering NodeJS applications cut down overall processing time by outputting data in blocks with the help of callback functions; they do not buffer any data. Open Source The platform is free to use and open to contributions from the developer community. Performance Since NodeJS is built on Google Chrome's V8 JavaScript engine, it facilitates faster execution of code. Leveraging asynchronous programming and non-blocking concepts, it can offer high-speed performance. The V8 engine makes code execution easier, faster, and more efficient by compiling JavaScript into machine code. Caching The platform also stands out in its caching ability. It caches modules, making retrieving web pages faster and easier. With caching, there is no need to re-execute code after the first request; the module can be retrieved seamlessly from the application's memory. License The platform is available under the MIT license. What Are the Top NodeJS Frameworks for the Backend? Frameworks for NodeJS help software architects develop applications efficiently and with ease. Here are the best NodeJS backend frameworks: 1. Express.js Express.js is an open-source NodeJS module with around 18 million downloads per week, present in more than 20k stacks, and used by over 1,733 companies worldwide. This flexible, top NodeJS framework offers cutting-edge features that enable developers to build robust single-page, multi-page, and hybrid web applications. With Express.js, the development of Node-based applications is fast and easy. It is a minimal framework with many capabilities accessible through plugins. Express.js was originally developed by TJ Holowaychuk and first released on May 22, 2010.
It is widely known and used by leading corporations like Fox Sports, PayPal, Uber, IBM, Twitter, Stack, Accenture, and so on. Key Features of Express.js Here are the features of Express.js: Faster server-side development Great performance: It offers a thin layer of robust application development features without tampering with NodeJS' capabilities. Many tools are based on Express.js Dynamic rendering of HTML pages Enables setting up of middleware to respond to HTTP requests Very high test coverage Efficient routing Content negotiation Executable for generating applications swiftly Debugging: The framework makes debugging very easy by offering a debugging feature capable of showing developers where the bugs are When To Use Express.js Due to the high-end features outlined above (detailed routing, configuration, security features, and debugging mechanisms), this NodeJS framework is ideal for any enterprise-level or web-based app. That said, it is advisable to do a thorough NodeJS framework comparison before making a choice. 2. Next.js Next.js is an open-source, minimalistic framework for server-rendered React applications. The tool has about 1.8 million downloads, is present in more than 2.7k stacks, and is used by over 800 organizations. Developers leverage the full-stack framework to build highly interactive platforms with SEO-friendly features. Version 12 of the tool was released in October 2021, promising to offer the best value yet. This top NodeJS framework enables React-based web application capabilities like server-side rendering and static page generation. It offers an amazing development experience with the features you need for production, ranging from smart bundling and TypeScript support to server rendering and so on. In addition, no configuration is needed. It makes building fast and user-friendly static websites and web applications with React easy.
With Automatic Static Optimization, Next.js builds hybrid applications that feature both statically generated and server-rendered pages. Features of Next.js Here are the key features of Next.js: Great page-based routing API Hybrid pages Automatic code splitting Image optimization Built-in CSS and Sass support Fully extendable Detailed documentation Faster development Client-side routing with prefetching When To Use Next.js If you are experienced in React, you can leverage Next.js to build a highly demanding app or web shop. The framework comes with a range of modern web technologies you can use to develop robust, fast, and highly interactive applications. 3. Koa Koa is an open-source backend tech stack with about 1 million downloads per week, present in more than 400 stacks, and used by up to 90 companies. The framework is going for a big jump with version 2. It was built by the same team of developers behind Express, but they created it with the purpose of providing something smaller and more expressive that offers a stronger foundation for web applications and APIs. This framework stands out because it uses async functions, enabling you to eliminate callbacks and improve error handling. Koa leverages various tools and methods to make coding web applications and APIs easy and fun. The framework does not bundle any middleware. The tool is similar to other popular middleware technologies; however, it offers a suite of methods that promote interoperability, robustness, and ease of writing middleware. In a nutshell, the capabilities that Koa provides help developers build web applications and APIs faster and more efficiently. Features of Koa Here are some of the key features that make Koa stand out from other top NodeJS frameworks: The framework is not bundled with any middleware. Small footprint: Being a lightweight and flexible tool, it has a smaller footprint than other NodeJS frameworks.
That notwithstanding, you have the flexibility to extend the framework using plugins: you can plug in a wide variety of modules. Contemporary framework: Koa is built using recent technologies and specifications (ECMAScript 2015). As a result, programs developed with it are likely to remain relevant for an extended period. Error handling: The framework has features that streamline error handling and make it easier for programmers to spot and eliminate errors. This results in web applications with minimal crashes or issues. Faster development: One of the core goals of top NodeJS frameworks is to make software development faster and more fun. Koa, a lightweight and flexible framework, helps developers accelerate development with its modern technologies. When To Use Koa The same team developed Koa and Express. Express provides features that “augment node,” while Koa was created with the objective to “fix and replace node.” It stands out because it simplifies error handling and frees apps from callback hell. Instead of Node's req and res objects, Koa exposes its ctx.request and ctx.response objects. On the flip side, Express augments Node's req and res objects with extra features like routing and templating, which Koa does not. Koa is the ideal framework to use if you want to get rid of callbacks, while Express is suitable when you want conventional NodeJS-style coding. 4. Nest.js Nest.js is a NodeJS framework that is great for developing scalable and efficient server-side applications. Nest has about 800K downloads per week, is present in over 1K stacks, and is used by over 200 organizations. It is a progressive framework and an MIT-licensed open-source project. Through official support, an expert from the Nest core team can assist you whenever needed. Nest was developed with TypeScript, uses modern JavaScript, and combines object-oriented programming (OOP), functional programming (FP), and functional reactive programming (FRP).
The framework makes application development easy and enables compatibility with a collection of other libraries, including Fastify. Nest stands out among NodeJS frameworks by providing an application architecture that simplifies the development of scalable, maintainable, and efficient apps. Features of Nest.js The following are the key features of Nest.js: Nest solves the architecture problem: Even though there are several libraries, helpers, and tools for NodeJS, none of them has solved the server-side architecture problem. Nest offers an application architecture that enables the development of scalable, testable, maintainable, and loosely coupled applications. Easy to use: Nest.js is a progressive framework that is easy to learn and master. Its architecture is similar to that of Angular, Java, and .NET frameworks, so the learning curve is not steep, and developers can easily understand and use the system. It leverages TypeScript. Nest makes application unit testing easy and straightforward. Ease of integration: It supports a range of Nest-specific modules that integrate easily with technologies such as TypeORM, Mongoose, and more. It encourages code reusability. Amazing documentation When To Use Nest.js Nest is the ideal framework for the fast and efficient development of applications with simple structures. If you are looking to build apps that are scalable and easy to maintain, Nest is a great option. In addition to being among the fastest-growing NodeJS frameworks, it enjoys a large community and an active support system. Through the support platform, developers can receive official help for a dynamic development process, while the Nest community is a great place to interact with other developers and get insights and solutions to common development challenges. 5. Hapi.js This is an open-source NodeJS framework suitable for developing great and scalable web apps.
Hapi.js has about 400K downloads per week, is present in over 300 stacks, and more than 76 organizations report using it. The framework is ideal for building HTTP-proxy applications, websites, and API servers. Hapi was originally created by Walmart's mobile development team to handle their Black Friday traffic. Since then, it has been improved into a powerful standalone Node framework that stands out with built-in modules and other essential capabilities. Hapi has out-of-the-box features that enable developers to build scalable applications with minimal overhead. The security, simplicity, and satisfaction associated with this framework are everything you need for creating powerful applications and meeting enterprise-grade backend needs. Features of Hapi.js Here are the features that make Hapi one of the best NodeJS frameworks: Security: You do not have to worry much about security when using Hapi. Every line of code is thoroughly verified, and there is an advanced security process to ensure the safety of the platform. In addition, Hapi is a leading NodeJS framework with no external code dependencies. Its security features and processes include regular updates, end-to-end code hygiene, a high-end authentication process, and in-house security architecture. Rich ecosystem: There is a wide range of official plugins, so you can easily find a trusted and secure plugin for critical functionality. With this exhaustive range of plugins, you do not have to risk the security of your project by trusting external middleware, even when it appears trustworthy on npm. Quality: When it comes to quantifiable quality metrics, Hapi scores higher than many other NodeJS frameworks on parameters like code clarity, coverage and style, and open issues. User experience: The framework enables friction-free development.
As a developer-first platform, Hapi offers advanced features to help you speed up processes and increase your productivity. Straightforward implementation: It streamlines the development process and enables you to implement what works directly. The code does exactly what it is created to do; you do not have to waste time experimenting to see what might work. Easy-to-learn interface Predictability Extensibility and customization When To Use Hapi.js Hapi does not rely heavily on middleware. Important functionalities like body parsing, input/output validation, HTTP-friendly error objects, and more are integral parts of the framework. There is a wide range of plugins, and it is the only top NodeJS framework that does not depend on external dependencies. With its advanced functionality, security, and reliability, Hapi stands out from frameworks like Express (which relies heavily on middleware for a significant part of its capabilities). If you are considering Express for your web app or REST API project, Hapi is a reliable alternative. 6. Fastify Fastify is an open-source NodeJS tool with 21.7K stars on GitHub, 300K weekly downloads, and more than 33 companies reporting that they use it. This framework provides an outstanding user experience, a great plugin architecture, speed, and low overhead. Fastify is inspired by Hapi and Express and, given its performance, is known as one of the fastest web frameworks. Popular organizations like Skeelo, Satiurn, 2hire, Commons.host, and many more are powered by Fastify. Features of Fastify Fastify is one of the best frameworks for NodeJS. Here are some of its amazing features: Great performance: It is among the fastest NodeJS frameworks, able to serve up to 30 thousand requests per second. Fastify focuses on improved responsiveness and user experience, all at a lower cost. Highly extensible: Hooks, decorators, and plugins make Fastify fully extensible.
Developer-first framework: The framework is built with coders in mind. It is highly expressive, with all the features developers need to build scalable applications faster without compromising quality, performance, or security. If you are looking for a high-performance, developer-friendly framework, Fastify checks all the boxes. Logging: Because logging is crucial and expensive, Fastify ships with a fast, low-overhead logger. TypeScript ready When To Use Fastify This is the ideal framework for building APIs that can handle a lot of traffic. When developing a server, Fastify is a great alternative to Express. If you want a top NodeJS framework that is secure, highly performant, fast, and reliable with low overhead, Fastify stands out as an excellent option. Conclusion NodeJS is unarguably a leading software development technology with many reliable and highly performant frameworks. These NodeJS frameworks make application development easier, faster, and more cost-effective. With a well-chosen framework at hand, you are likely to spend less time and fewer resources on development by using its templates and code libraries. NodeJS frameworks can help you create the type of application you have always wanted. However, the result you get depends heavily on the quality of your decision: choosing a framework that is not right for your type of project will negatively impact your result. So make sure you consider the requirements of your project.
Welcome back to the series where we have been building an application with Qwik that incorporates AI tooling from OpenAI. So far we've created a pretty cool app that uses AI to generate text and images. Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying Now, there's just one more thing to do. It's launch time! I'll be deploying to Akamai's cloud computing services (formerly Linode), but these steps should work with any VPS provider. Let's do this! Setup Runtime Adapter There are a couple of things we need to get out of the way first: deciding where we are going to run our app, what runtime it will run in, and how the deployment pipeline should look. As I mentioned before, I'll be deploying to a VPS in Akamai's connected cloud, but any other VPS should work. For the runtime, I'll be using Node.js, and I'll keep the deployment simple by using Git. Qwik is cool because it's designed to run in multiple JavaScript runtimes. That's handy, but it also means that our code isn't ready to run in production as is. Qwik needs to be aware of its runtime environment, which we handle with adapters. We can see and install the available adapters with the command npm run qwik add. This will prompt us with several options for adapters, integrations, and plugins. In my case, I'll go down and select the Fastify adapter. It works well on a VPS running Node.js. You can select a different target if you prefer. Once you select your integration, the terminal will show you the changes it's about to make and prompt you to confirm. You'll see that it wants to modify some files, create some new ones, install dependencies, and add some new npm scripts. Make sure you're comfortable with these changes before confirming. Once these changes are installed, your app will have what it needs to run in production. You can test this by building the production assets and running the serve command.
(Note: For some reason, npm run build always hangs for me, so I run the client and server build scripts separately.) npm run build.client && npm run build.server && npm run serve This will build the production assets and start the production server listening for requests at http://localhost:3000. If all goes well, you should be able to open that URL in your browser and see your app there. It won't fully work yet because it's missing the OpenAI API keys, but we'll sort that part out on the production server. Push Changes To Git Repo As mentioned above, this deployment process is going to be focused on simplicity, not automation. So rather than introducing more complex tooling like Docker containers or Kubernetes, we'll stick to a simpler, more manual process: using Git to deploy our code. I'll assume you already have some familiarity with Git and a remote repo you can push to. If not, please go make one now. You'll need to commit your changes and push them to your repo: git commit -am "ready to commit" && git push origin main Prepare Production Server If you already have a VPS ready, feel free to skip this section. I'll be deploying to an Akamai VPS. I won't walk through the step-by-step process of setting up a server, but in case you're interested, I chose the Nanode 1 GB shared CPU plan for $5/month with the following specs: Operating system: Ubuntu 22.04 LTS Location: Seattle, WA CPU: 1 RAM: 1 GB Storage: 25 GB Transfer: 1 TB Choosing different specs shouldn't make a difference when it comes to running your app, although some of the commands to install dependencies may differ. If you've never done this before, then try to match what I have above. You can even use a different provider, as long as you're deploying to a server to which you have SSH access. Once you have your server provisioned and running, you should have a public IP address that looks something like 172.100.100.200.
You can log into the server from your terminal with the following command: ssh root@172.100.100.200 You'll have to provide the root password if you have not already set up an authorized key. We'll use Git as a convenient tool to get our code from our repo onto our server, so it will need to be installed. But before we do that, I always recommend updating the existing software. We can do the update and installation with the following command: sudo apt update && sudo apt install git -y Our server also needs Node.js to run our app. We could install the binary directly, but I prefer to use a tool called NVM, which allows us to easily manage Node versions. We can install it with this command: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash Once NVM is installed, you can install the latest version of Node with: nvm install node Note that the terminal may say that NVM is not installed. If you exit the server and sign back in, it should work. Upload, Build, and Run App With our server set up, it's time to get our code installed. With Git, it's relatively easy. We can copy our code onto the server using the clone command. You'll want to use your own repo, but it should look something like this: git clone https://github.com/AustinGil/versus.git Our source code is now on the server, but it's still not quite ready to run. We still need to install the NPM dependencies, build the production assets, and provide any environment variables. Let's do it! First, navigate to the folder where you just cloned the project. I used: cd versus The install is easy enough: npm install The build command is: npm run build However, if you have any type-checking or linting errors, it will hang there. You can either fix the errors (which you probably should) or bypass them and build anyway with: npm run build.client && npm run build.server The latest version of the project source code has working types if you want to check that. The last step is a bit tricky.
As we saw above, environment variables will not be injected from the .env file when running the production app. Instead, we can provide them at runtime right before the serve command, like this: OPENAI_API_KEY=your_api_key npm run serve You'll want to provide your own API key there in order for the OpenAI requests to work. Also, for Node.js deployments, there's an extra, necessary step: you must set an ORIGIN variable to the full URL where the app will be running. Qwik needs this information to properly configure its CSRF protection. If you don't know the URL, you can disable this feature in the /src/entry.preview.tsx file by setting the createQwikCity options property checkOrigin to false: export default createQwikCity({ render, qwikCityPlan, checkOrigin: false }); This process is outlined in more detail in the docs, but disabling the check is not recommended, as CSRF attacks can be quite dangerous. Besides, you'll need a URL to deploy the app anyway, so you may as well set the ORIGIN environment variable. Note that if you make this change, you'll need to rerun the build and serve commands. If everything is configured correctly and running, you should start seeing the logs from Fastify in the terminal, confirming that the app is up and running: {"level":30,"time":1703810454465,"pid":23834,"hostname":"localhost","msg":"Server listening at http://[::1]:3000"} Unfortunately, accessing the app via IP address and port number doesn't show the app (at least not for me). This is likely a networking issue, but it's also something that will be solved in the next section, where we run our app at the root domain. The Missing Steps Technically, the app is deployed, built, and running, but in my opinion, there is a lot to be desired before we can call it "production-ready." Some tutorials would assume you know how to do the rest, but I don't want to do you like that.
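To make the environment-variable requirements explicit, the server entry point can validate them at startup and fail fast with a clear message instead of crashing later mid-request. This is a suggestion layered on top of the setup above; the variable names match the ones discussed, but the loadConfig helper is my own sketch, not part of Qwik:

```javascript
// Hedged sketch: validate required environment variables at startup.
// OPENAI_API_KEY and ORIGIN are the variables discussed in the article;
// the loadConfig helper itself is illustrative, not part of Qwik.
function loadConfig(env = process.env) {
  const missing = ['OPENAI_API_KEY', 'ORIGIN'].filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return { openaiApiKey: env.OPENAI_API_KEY, origin: env.ORIGIN };
}

// Then launch with both variables set, e.g.:
//   OPENAI_API_KEY=your_api_key ORIGIN=https://example.com npm run serve
```

Failing at boot with a named list of missing variables is much easier to debug over SSH than a mysterious 500 on the first OpenAI request.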
We're going to cover: Running the app in background mode Restarting the app if the server crashes Accessing the app at the root domain Setting up an SSL certificate One thing you will need to do for yourself is buy a domain name. There are lots of good registrars. I've been a fan of Porkbun and Namesilo. I don't think there's a huge difference in which registrar you use, but I like these because they offer WHOIS privacy and email forwarding at no extra charge on top of their already low prices. Before we do anything else on the server, it'll be a good idea to point your domain name's A record (@) to the server's IP address. Doing this sooner can help with propagation times. Now, back on the server, there's one glaring issue we need to deal with first. When we run the npm run serve command, our app will run only as long as we keep the terminal open. Obviously, it would be nice to exit the server, close our terminal, and walk away from our computer to go eat pizza without the app crashing. So we'll want to run that command in the background. There are plenty of ways to accomplish this (Docker, Kubernetes, Pulumi, etc.), but I don't like to add too much complexity. So for a basic app, I like to use PM2, a Node.js process manager with great features, including the ability to run our app in the background. From inside your server, run this command to install PM2 as a global NPM module: npm install -g pm2 Once it's installed, we can tell PM2 what command to run with the "start" command: pm2 start "npm run serve" PM2 has a lot of really nice features in addition to running our apps in the background. One thing you'll want to be aware of is the command to view logs from your app: pm2 logs In addition to running our app in the background, PM2 can also be configured to restart any process if the server crashes. This is super helpful to avoid downtime.
You can set that up with this command: pm2 startup Ok, our app is now running and will continue to run after a server restart. Great! But we still can’t get to it. Lol! My preferred solution is using Caddy. This will resolve the networking issues, work as a great reverse proxy, and take care of the whole SSL process for us. We can follow the install instructions from their documentation and run these five commands: sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list sudo apt update sudo apt install caddy Once that’s done, you can go to your server’s IP address and you should see the default Caddy welcome page: Progress! In addition to showing us something is working, this page also gives us some handy information on how to work with Caddy. Ideally, you’ve already pointed your domain name to the server’s IP address. Next, we’ll want to modify the Caddyfile: sudo nano /etc/caddy/Caddyfile As their instructions suggest, we’ll want to replace the :80 line with our domain (or subdomain), but instead of uploading static files or changing the site root, I want to remove (or comment out) the root line and enable the reverse_proxy line, pointing the reverse proxy to my Node.js app running at port 3000. versus.austingil.com { reverse_proxy localhost:3000 } After saving the file and reloading Caddy (systemctl reload caddy), the new Caddyfile changes should take effect. Note that it may take a few moments before the app is fully up and running. This is because one of Caddy’s features is to provision a new SSL certificate for the domain. It also sets up the automatic redirect from HTTP to HTTPS. 
So now, if you go to your domain (or subdomain), you should be redirected to the HTTPS version, running a reverse proxy in front of your generative AI application, which is resilient to server crashes. How awesome is that!? Using PM2, we can also enable some load balancing in case you're running a server with multiple cores. The full PM2 command, including environment variables and load balancing, might look something like this: OPENAI_API_KEY=your_api_key ORIGIN=example.com pm2 start "npm run serve" -i max A few notes: you may need to remove the current instance from PM2 and rerun the start command; you don't have to restart the Caddy process unless you change the Caddyfile; and any changes to the Node.js source code will require a rebuild before running it again. Hell Yeah! We Did It! Alright, that's it for this blog post and this series. I sincerely hope you enjoyed both and learned some cool things. Today, we covered a lot of the things you need to know to deploy an AI-powered application: Runtime adapters Building for production Environment variables Process managers Reverse proxies SSL certificates If you missed any of the previous posts, be sure to go back and check them out. I'd love to know what you thought about the whole series. If you want, you can play with the app I built. Let me know if you deployed your own app. Also, if you have ideas for topics you'd like me to discuss in the future, I'd love to hear them :) UPDATE: If you liked this project and are curious to see what it might look like as a SvelteKit app, check out this blog post by Tim Smith where he converts this existing app over. Thank you so much for reading.
This new era is characterized by the rise of decentralized applications (DApps), which operate on blockchain technology, offering enhanced security, transparency, and user sovereignty. For a full-stack developer, understanding how to build DApps using popular tools like Node.js is not just a skill upgrade; it's a doorway to the future of web development. In this article, we'll explore how Node.js, a versatile JavaScript runtime, can be a powerful tool in the creation of DApps. We'll walk through the basics of Web 3.0 and DApps, the role of Node.js in this new environment, and provide practical guidance on building a basic DApp. Section 1: Understanding the Basics Web 3.0: An Overview Web 3.0, often referred to as the third generation of the internet, is built upon the core concepts of decentralization, openness, and greater user utility. In contrast to Web 2.0, where data is centralized in the hands of a few large companies, Web 3.0 aims to return control and ownership of data to users. This is achieved through blockchain technology, which allows for decentralized storage and operations. Decentralized Applications (DApps) Explained DApps are applications that run on a decentralized network supported by blockchain technology. Unlike traditional applications, which rely on centralized servers, DApps operate on a peer-to-peer network, which makes them more resistant to censorship and central points of failure. The benefits of DApps include increased security and transparency, reduced risk of data manipulation, and improved trust and privacy for users. However, they also present challenges, such as scalability issues and the need for new development paradigms. Section 2: The Role of Node.js in Web 3.0 Why Node.js for DApp Development Node.js, renowned for its efficiency and scalability in building network applications, stands as an ideal choice for DApp development.
Its non-blocking, event-driven architecture makes it well-suited for handling the asynchronous nature of blockchain operations. Here's why Node.js is a key player in the Web 3.0 space: Asynchronous processing: Blockchain transactions are inherently asynchronous. Node.js excels in handling asynchronous operations, making it perfect for managing blockchain transactions and smart contract interactions. Scalability: Node.js can handle numerous concurrent connections with minimal overhead, a critical feature for DApps that might need to scale quickly. Rich ecosystem: Node.js boasts an extensive ecosystem of libraries and tools, including those specifically designed for blockchain-related tasks, such as Web3.js and ethers.js. Community and support: With a large and active community, Node.js offers vast resources for learning and troubleshooting, essential for the relatively new field of Web 3.0 development. Setting up the Development Environment To start developing DApps with Node.js, you need to set up an environment that includes the following tools and frameworks: Node.js: Ensure you have the latest stable version of Node.js installed. NPM (Node Package Manager): Comes with Node.js and is essential for managing packages. Truffle suite: A popular development framework for Ethereum, useful for developing, testing, and deploying smart contracts. Ganache: Part of the Truffle Suite, Ganache allows you to run a personal Ethereum blockchain on your local machine for testing and development purposes. Web3.js or ethers.js libraries: These JavaScript libraries allow you to interact with a local or remote Ethereum node using an HTTP or IPC connection. With these tools, you’re equipped to start building DApps that interact with Ethereum or other blockchain networks. Section 3: Building a Basic Decentralized Application Designing the DApp Architecture Before diving into coding, it's crucial to plan the architecture of your DApp. 
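To make the asynchronous-processing point concrete, here is a sketch of the typical pattern. The `provider` here is a stub standing in for a real one (e.g. an ethers.js `JsonRpcProvider`), so the snippet runs without a blockchain node; all values are invented:

```javascript
// Sketch: concurrent, non-blocking blockchain reads, as you would do with
// ethers.js. `provider` is a stub in place of e.g. new ethers.JsonRpcProvider(url).
const provider = {
  getBlockNumber: async () => 19000000,                  // invented block height
  getBalance: async (address) => 1500000000000000000n,   // invented balance in wei
}

async function snapshot(address) {
  // Node.js lets both requests stay in flight at the same time
  const [block, balance] = await Promise.all([
    provider.getBlockNumber(),
    provider.getBalance(address),
  ])
  return { block, balance }
}

snapshot('0x0000000000000000000000000000000000000001').then((s) =>
  console.log(s.block, s.balance)
)
```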
This involves deciding on the frontend and backend components, the blockchain network to interact with, and how these elements will communicate with each other. Frontend: This is what users will interact with. It can be built with any frontend technology, but in this context, we'll focus on integrating it with a Node.js backend. Backend: The backend will handle business logic, interact with the blockchain, and provide APIs for the front end. Node.js, with its efficient handling of I/O operations, is ideal for this. Blockchain interaction: Your DApp will interact with a blockchain, typically through smart contracts. These are self-executing contracts with the terms of the agreement directly written into code. Developing the Backend With Node.js Setting up a Node.js server: Create a new Node.js project and set up an Express.js server. This server will handle API requests from your front end. Writing smart contracts: You can write smart contracts in Solidity (for Ethereum-based DApps) and deploy them to your blockchain network. Integrating smart contracts with Node.js: Use the Web3.js or ethers.js library to interact with your deployed smart contracts. This integration allows your Node.js server to send transactions and query data from the blockchain. Connecting to a Blockchain Network Choosing a blockchain: Ethereum is a popular choice due to its extensive support and community, but other blockchains like Binance Smart Chain or Polkadot can also be considered based on your DApp’s requirements. Local blockchain development: Use Ganache for a local blockchain environment, which is crucial for development and testing. Integration with Node.js: Utilize Web3.js or ethers.js to connect your Node.js application to the blockchain. These libraries provide functions to interact with the Ethereum blockchain, such as sending transactions, interacting with smart contracts, and querying blockchain data. 
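As a sketch of the backend integration step above, here is an Express-style route handler that reads from a contract. The `contract` object is a stub standing in for one created with `new ethers.Contract(address, abi, provider)`, and the fake `res` object exists only so the sketch runs without Express:

```javascript
// Stub contract -- in a real app: new ethers.Contract(address, abi, provider)
const contract = {
  totalSupply: async () => 1000000n, // invented value
}

// Express-style handler, e.g. mounted as app.get('/api/supply', getSupply)
async function getSupply(req, res) {
  try {
    const supply = await contract.totalSupply()
    res.json({ supply: supply.toString() }) // BigInt is not JSON-serializable
  } catch (err) {
    res.status(500).json({ error: 'contract call failed' })
  }
}

// Minimal fake res object so the handler can run standalone
const res = {
  body: null,
  json(payload) { this.body = payload; return this },
  status(code) { this.code = code; return this },
}
getSupply({}, res).then(() => console.log(res.body))
```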
Section 4: Frontend Development and User Interface Building the Frontend Developing the front end of a DApp involves creating user interfaces that interact seamlessly with the blockchain via your Node.js backend. Here are key steps and considerations: Choosing a framework: While you can use any frontend framework, React.js is a popular choice due to its component-based architecture and efficient state management, which is beneficial for responsive DApp interfaces. Designing the user interface: Focus on simplicity and usability. Remember, DApp users might range from blockchain experts to novices, so clarity and ease of use are paramount. Integrating with the backend: Use RESTful APIs or GraphQL to connect your front end with the Node.js backend. This will allow your application to send and receive data from the server. Interacting With the Blockchain Web3.js or ethers.js on the front end: These libraries can also be used on the client side to interact directly with the blockchain for tasks like initiating transactions or querying smart contract states. Handling transactions: Implement UI elements to show transaction status and gas fees and to facilitate wallet connections (e.g., using MetaMask). Ensuring security and privacy: Implement standard security practices such as SSL/TLS encryption, and be mindful of the data you expose through the front end, considering the public nature of blockchain transactions. User Experience in DApps Educating the user: Given the novel nature of DApps, consider including educational tooltips or guides. Responsive and interactive design: Ensure the UI is responsive and provides real-time feedback, especially important during blockchain transactions which might take longer to complete. Accessibility: Accessibility is often overlooked in DApp development. Ensure that your application is accessible to all users, including those with disabilities. 
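Wallet connection deserves a concrete sketch. MetaMask injects an EIP-1193 provider as `window.ethereum`, and `eth_requestAccounts` is the standard request for prompting the user to connect; here the provider is passed in as a parameter (with a mock) so the code also runs outside a browser:

```javascript
// Connects a wallet via an EIP-1193 provider (e.g. window.ethereum from MetaMask).
async function connectWallet(ethereum) {
  if (!ethereum) throw new Error('No wallet found -- is MetaMask installed?')
  // Standard EIP-1193 call; in a real browser this prompts the user
  const accounts = await ethereum.request({ method: 'eth_requestAccounts' })
  return accounts[0] // the first account is the currently selected one
}

// Mock provider so the sketch runs outside a browser; the address is invented
const mockEthereum = {
  request: async ({ method }) =>
    method === 'eth_requestAccounts'
      ? ['0x0000000000000000000000000000000000000001']
      : [],
}

connectWallet(mockEthereum).then((account) => console.log(account))
```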
Section 5: Testing and Deployment Testing Your DApp Testing is a critical phase in DApp development, ensuring the reliability and security of your application. Here’s how you can approach it: Unit testing smart contracts: Use frameworks like Truffle or Hardhat for testing your smart contracts. Write tests to cover all functionalities and potential edge cases. Testing the Node.js backend: Implement unit and integration tests for your backend using tools like Mocha and Chai. This ensures your server-side logic and blockchain interactions are functioning correctly. Frontend testing: Use frameworks like Jest (for React apps) to test your frontend components. Ensure that the UI interacts correctly with your backend and displays blockchain data accurately. End-to-end testing: Conduct end-to-end tests to simulate real user interactions across the entire application. Tools like Cypress can automate browser-based interactions. Deployment Strategies for DApps Deploying a DApp involves multiple steps, given its decentralized nature: Smart contract deployment: Deploy your smart contracts to the blockchain. This is typically done on a testnet before moving to the mainnet. Verify and publish your contract source code, if applicable, for transparency. Backend deployment: Choose a cloud provider or a server to host your Node.js backend. Consider using containerization (like Docker) for ease of deployment and scalability. Frontend deployment: Host your front end on a web server. Static site hosts like Netlify or Vercel are popular choices for projects like these. Ensure that the frontend is securely connected to your backend and the blockchain. Post-Deployment Considerations Monitoring and maintenance: Regularly monitor your DApp for any issues, especially performance and security-related. Keep an eye on blockchain network updates that might affect your DApp. 
User feedback and updates: Be prepared to make updates based on user feedback and ongoing development in the blockchain ecosystem. Community building: Engage with your user community for valuable insights and to foster trust in your DApp. Section 6: Advanced Topics and Best Practices Advanced Node.js Features for DApps Node.js offers a range of advanced features that can enhance the functionality and performance of DApps: Stream API for efficient data handling: Utilize Node.js streams for handling large volumes of data, such as blockchain event logs, efficiently. Cluster module for scalability: Leverage the Cluster module to handle more requests and enhance the performance of your DApp. Using caching for improved performance: Implement caching strategies to reduce load times and enhance user experience. Security Best Practices Security is paramount in DApps due to their decentralized nature and value transfer capabilities: Smart contract security: Conduct thorough audits of smart contracts to prevent vulnerabilities like reentrancy attacks or overflow/underflow. Backend security: Secure your Node.js backend by implementing rate limiting, CORS (Cross-Origin Resource Sharing), and using security modules like Helmet. Frontend security measures: Ensure secure communication between the front end and the back end. Validate user input to prevent XSS (Cross-Site Scripting) and CSRF (Cross-Site Request Forgery) attacks. Performance Optimization Optimizing the performance of DApps is essential for user retention and overall success: Optimize smart contract interactions: Minimize on-chain transactions and optimize smart contract code to reduce gas costs and improve transaction times. Backend optimization: Use load balancing and optimize your database queries to handle high loads efficiently. Frontend performance: Implement lazy loading, efficient state management, and optimize resource loading to speed up your front end. 
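The caching strategy mentioned under performance optimization can be as simple as an in-memory map with a time-to-live, useful for expensive chain reads that change slowly. A minimal sketch (in production you would more likely reach for Redis or similar):

```javascript
// Minimal in-memory TTL cache -- a sketch, not a production cache.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs
    this.store = new Map() // key -> { value, expiresAt }
  }
  get(key) {
    const entry = this.store.get(key)
    if (!entry || entry.expiresAt < Date.now()) {
      this.store.delete(key) // evict stale entries lazily
      return undefined
    }
    return entry.value
  }
  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}

const cache = new TtlCache(60_000) // cache chain reads for one minute
cache.set('latestBlock', 19000000)
console.log(cache.get('latestBlock')) // 19000000 while fresh
```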
Staying Updated With Web 3.0 Developments Web 3.0 is a rapidly evolving field. Stay updated with the latest developments in blockchain technology, Node.js updates, and emerging standards in the DApp space. Encouraging Community Contributions Open-source contributions can significantly improve the quality of your DApp. Encourage and facilitate community contributions to foster a collaborative development environment. Conclusion The journey into the realm of Web 3.0 and decentralized applications is not just a technological leap but a step towards a new era of the internet — one that is more secure, transparent, and user-centric. Through this article, we've explored how Node.js, a robust and versatile technology, plays a crucial role in building DApps, offering the scalability, efficiency, and rich ecosystem necessary for effective development. From understanding the basics of Web 3.0 and DApps, diving into the practicalities of using Node.js, to detailing the nuances of frontend and backend development, testing, deployment, and best practices, we have covered a comprehensive guide for anyone looking to embark on this exciting journey. As you delve into the world of decentralized applications, remember that this field is constantly evolving. Continuous learning, experimenting, and adapting to new technologies and practices are key. Engage with the community, contribute to open-source projects, and stay abreast of the latest trends in blockchain and Web 3.0. The future of the web is decentralized, and as a developer, you have the opportunity to be at the forefront of this revolution. Embrace the challenge, and use your skills and creativity to build applications that contribute to a more open, secure, and user-empowered internet.
We just published a new ScyllaDB sample application, a video streaming app. The project is available on GitHub. This blog covers the video streaming application’s features and tech stack and breaks down the data modeling process. Video Streaming App Features The app has a minimal design with the most essential video streaming application features: List all videos, sorted by creation date (home page) List videos that you started watching Watch video Continue watching a video where you left off Display a progress bar under each video thumbnail Technology Stack Programming language: TypeScript Database: ScyllaDB Framework: NextJS (pages router) Component library: Material UI Using ScyllaDB for Low-Latency Video Streaming Applications ScyllaDB is a low-latency and high-performance NoSQL database compatible with Apache Cassandra and DynamoDB. It is well-suited to handle the large-scale data storage and retrieval requirements of video streaming applications. ScyllaDB has drivers in all the popular programming languages, and, as this sample application demonstrates, it integrates well with modern web development frameworks like NextJS. Low latency in the context of video streaming services is crucial for delivering a seamless user experience. To lay the groundwork for high performance, you need to design a data model that fits your needs. Let’s continue with an example data modeling process to see what that looks like. Video Streaming App Data Modeling In the ScyllaDB University Data Modeling course, we teach that NoSQL data modeling should always start with your application and queries first. Then, you work backward and create the schema based on the queries you want to run in your app. This process ensures that you create a data model that fits your queries and meets your requirements. With that in mind, let’s go over the queries that our video streaming app needs to run on each page load!
Page: Continue Watching On this page, users can see all the videos that they’ve started to watch. This view includes the video thumbnails and the progress bar under the thumbnail. Query: Get Watch Progress CQL SELECT video_id, progress FROM watch_history WHERE user_id = ? LIMIT 9; Schema: Watch History Table CQL CREATE TABLE watch_history ( user_id text, video_id text, progress int, watched_at timestamp, PRIMARY KEY (user_id) ); For this query, it makes sense to define user_id as the partition key because that is the filter we use to query the watch history table. Keep in mind that this schema might need to be updated later if there is a query that requires filtering on other columns beyond the user_id. For now, though, this schema is correct for the defined query. Besides the progress value, the app also needs to fetch the actual metadata of each video (for example, the title and the thumbnail image). For this, the `video` table has to be queried. Query: Get Video Metadata CQL SELECT * FROM video WHERE id IN ?; Notice how we use the “IN” operator and not “=” because we need to fetch a list of videos, not just a single video. Schema: Video Table CQL CREATE TABLE video ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (id) ); For the video table, let’s define the id as the partition key because that’s the only filter we use in the query. Page: Watch Video If users click on any of the “Watch” buttons, they will be redirected to a page with a video player where they can start and pause the video. Query: Get Video Content CQL SELECT * FROM video WHERE id = ?; This is a very similar query to the one that runs on the Continue Watching page. Thus, the same schema will work just fine for this query as well.
Schema: Video Table CQL CREATE TABLE video ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (id) ); Page: Most Recent Videos Finally, let’s break down the Most Recent Videos page, which is the home page of the application. We analyze this page last because it is the most complex one from a data modeling perspective. This page lists ten of the most recently uploaded videos that are available in the database, ordered by the video creation date. We will have to fetch these videos in two steps: first, get the timestamps, then get the actual video content. Query: Get the Most Recent Ten Videos’ Timestamp CQL SELECT id, top10(created_at) AS date FROM recent_videos; You might notice that we use a custom function called top10(). This is not a standard function in ScyllaDB. It’s a UDF (user-defined function) that we created to solve this data modeling problem. This function returns an array of the most recent created_at timestamps in the table. Creating a new UDF in ScyllaDB can be a great way to solve your unique data modeling challenges. These timestamp values can then be used to query the actual video content that we want to show on the page. Query: Get Metadata for Those Videos CQL SELECT * FROM recent_videos WHERE created_at IN ? LIMIT 10; Schema: Recent Videos CQL CREATE MATERIALIZED VIEW recent_videos_view AS SELECT * FROM streaming.video WHERE created_at IS NOT NULL PRIMARY KEY (created_at, id); In the recent videos' materialized view, the created_at column is the primary key because we filter by that column in our first query to get the most recent timestamp values. Be aware that, in some cases, this can cause a hot partition. Furthermore, the UI also shows a small progress bar under each video’s thumbnail which indicates the progress you made watching that video. To fetch this value for each video, the app has to query the watch history table. 
Query: Get Watch Progress for Each Video CQL SELECT progress FROM watch_history WHERE user_id = ? AND video_id = ?; Schema: Watch History CQL CREATE TABLE watch_history ( user_id text, video_id text, progress int, watched_at timestamp, PRIMARY KEY (user_id, video_id) ); You might have noticed that the watch history table was already used in a previous query to fetch data. Now this time, the schema has to be modified slightly to fit this query. Let’s add video_id as a clustering key. This way, the query to fetch watch progress will work correctly. That’s it. Now, let’s see the final database schema! Final Database Schema CQL CREATE KEYSPACE IF NOT EXISTS streaming WITH replication = { 'class': 'NetworkTopologyStrategy', 'replication_factor': '3' }; CREATE TABLE streaming.video ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (id) ); CREATE TABLE streaming.watch_history ( user_id text, video_id text, progress int, watched_at timestamp, PRIMARY KEY (user_id, video_id) ); CREATE TABLE streaming.recent_videos ( id text, content_type text, title text, url text, thumbnail text, created_at timestamp, duration int, PRIMARY KEY (created_at) ); User-Defined Function for the Most Recent Videos Page CQL -- Create a UDF for recent videos CREATE OR REPLACE FUNCTION state_f(acc list<timestamp>, val timestamp) CALLED ON NULL INPUT RETURNS list<timestamp> LANGUAGE lua AS $$ if val == nil then return acc end if acc == nil then acc = {} end table.insert(acc, val) table.sort(acc, function(a, b) return a > b end) if #acc > 10 then table.remove(acc, 11) end return acc $$; CREATE OR REPLACE FUNCTION reduce_f(acc1 list<timestamp>, acc2 list<timestamp>) CALLED ON NULL INPUT RETURNS list<timestamp> LANGUAGE lua AS $$ result = {} i = 1 j = 1 while #result < 10 do if acc1[i] > acc2[j] then table.insert(result, acc1[i]) i = i + 1 else table.insert(result, acc2[j]) j = j + 1 end end return result $$; CREATE OR REPLACE AGGREGATE 
top10(timestamp) SFUNC state_f STYPE list<timestamp> REDUCEFUNC reduce_f; This UDF uses Lua, but you could also use Wasm to create UDFs in ScyllaDB. Before creating the function, make sure to enable UDFs in the scylla.yaml configuration file (location: /etc/scylla/scylla.yaml). Clone the Repo and Get Started! To get started… Clone the repository: git clone https://github.com/scylladb/video-streaming Install the dependencies: npm install Modify the configuration file: Plain Text APP_BASE_URL="http://localhost:8000" SCYLLA_HOSTS="172.17.0.2" SCYLLA_USER="scylla" SCYLLA_PASSWD="xxxxx" SCYLLA_KEYSPACE="streaming" SCYLLA_DATACENTER="datacenter1" Migrate the database and insert sample data: npm run migrate Run the server: npm run dev Wrapping Up We hope you enjoy our video streaming app and that it helps you build low-latency and high-performance applications with ScyllaDB. If you want to keep on learning, check out ScyllaDB University, where we have free courses on data modeling, ScyllaDB drivers, and much more! If you have questions about the video streaming sample app or ScyllaDB, go to our forum, and let’s discuss! More ScyllaDB sample applications: CarePet – IoT Cloud Getting Started guide Feature Store Relevant resources: Video streaming app GitHub repository UDFs in ScyllaDB How ScyllaDB Distributed Aggregates Reduce Query Execution Time up to 20X Wasmtime: Supporting UDFs in ScyllaDB with WebAssembly ScyllaDB documentation
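As a closing aside on the data model: the accumulator logic of the top10() aggregate defined earlier is easy to mirror in plain JavaScript, which can help when reasoning about or testing the UDF. A sketch with invented timestamp values:

```javascript
// Mirror of the top10() accumulator described above: keep a descending
// list of at most ten timestamps while scanning values one by one.
function accumulate(acc, ts) {
  acc.push(ts)
  acc.sort((a, b) => b - a) // newest first
  if (acc.length > 10) acc.length = 10 // drop everything past the top ten
  return acc
}

const timestamps = [3, 11, 7, 1, 9, 15, 2, 8, 12, 5, 10, 6] // invented values
const top = timestamps.reduce(accumulate, [])
console.log(top) // the ten largest values, descending
```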
"We will soon migrate to TypeScript, and then. . . " How often do you hear this phrase? Perhaps, if you mainly work within a single project or mostly just start new projects from scratch, this is a relatively rare expression for you to hear. For me, as someone working in an outsourcing company who, in addition to my main project, sees dozens of other projects every month, it is quite a common phrase to hear from a development team or a client who would like to upgrade their project stack for easier team collaboration. Spoiler alert: the transition is probably not going to happen as soon as you think (most likely, never). While it may sound drastic, in most cases, this will indeed be the case. Most people who have not undergone such a transition may not be aware of the dozens of nuances that can arise during a project migration to TypeScript. For instance, are you prepared for the possibility that your project build, which took tens of seconds in pure JavaScript, might suddenly start taking tens of minutes when using TypeScript? Of course, it depends on your project's size, your pipeline configuration, etc., but these scenarios are not fabricated. You, as a developer, might be prepared for this inevitability, but what will your client think when you tell them that the budget for the server instance needs to be increased because the project build is now failing due to a heap out-of-memory error after adding TypeScript to the project? Yes, TypeScript, like any other tool, is not free. On the Internet, you can find a large number of articles about how leading companies successfully migrated their projects from pure JavaScript to TypeScript. While they usually describe a lot of the issues they had during the transition and how they overcame them, there are still many unspoken obstacles that people can encounter which can become critical to your migration.
Despite the awareness among most teams that adding typing to their projects through migration to TypeScript might not proceed as smoothly as depicted in various articles, they still consider TypeScript as the exclusive and definitive solution to address typing issues in their projects. This mindset can result in projects remaining in pure JavaScript for extended periods, and the eagerly anticipated typing remains confined to the realm of dreams. While alternative tools for introducing typing to JavaScript code do exist, TypeScript's overwhelming popularity often casts them into the shadows. This widespread acclaim, justified by the TypeScript team's active development, may, however, prove disadvantageous to developers. Developers tend to lean towards the perception that TypeScript is the only solution to typing challenges in a project, neglecting other options. Next, we will consider JSDoc as a tool that, when used correctly and understood in conjunction with other tools (like TypeScript), can help address the typing issue in a project virtually for free. Many might think that the functionality of JSDoc pales in comparison to TypeScript, and comparing them is unfair. To some extent, that is true, but on the other hand, it depends on the perspective. Each technology has its pros and cons, counterbalancing the other. Code examples will illustrate a kind of graceful degradation from TypeScript to JavaScript while maintaining typing functionality. While for some, this might appear as a form of progressive enhancement, again, it all depends on how you look at it. TypeScript to JSDoc: My vanilla JavaScript enums JSDoc and Its Extensions JSDoc is a specification for the comment format in JavaScript. This specification allows developers to describe the structure of their code, data types, function parameters, and much more using special comments. These comments can then be transformed into documentation using appropriate tools. JavaScript /** * Adds two numbers. 
* @param {number} a - The first number. * @param {number} b - The second number. * @returns {number} The sum of the two numbers. */ const getSum = (a, b) => { return a + b } Initially, JSDoc was created with the goal of generating documentation based on comments, and this functionality remains a significant part of the tool. However, it is not the only aspect. The second substantial aspect of the tool is the description of various types within the program: variable types, object types, function parameters, and many other structures. Since the fate of ECMAScript 4 was uncertain, and many developers lacked (and still lack to this day) proper typing, JSDoc started adding this much-needed typing to JavaScript. This contributed to its popularity, and as a result, many other tools began to rely on the JSDoc syntax. An interesting fact is that while the JSDoc documentation provides a list of basic tags, the specification itself allows developers to expand the list based on their needs. Tools built on top of JSDoc leverage this flexibility to the maximum by adding their own custom tags. Therefore, encountering a pure JSDoc setup is a relatively rare occurrence. TypeScript to JSDoc: Function typing The most well-known tools that rely on JSDoc are Closure Compiler (not to be confused with the Clojure programming language) and TypeScript. Both of these tools can help make your JavaScript typed, but they approach it differently. Closure Compiler primarily focuses on enhancing your .js files by adding typing through JSDoc annotations (after all, they are just comments), while TypeScript is designed for .ts files, introducing its own well-known TypeScript constructs such as type, interface, enum, namespace, and so on. Not from its inception, but starting from version 2.3, TypeScript began allowing something similar to Closure Compiler – checking type annotations in .js files based on the use of JSDoc syntax.
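Here is a sketch of what those JSDoc-based checks look like in practice. With `// @ts-check` (or `checkJs` enabled), TypeScript validates the annotations in an ordinary .js file; the names below are illustrative:

```javascript
// @ts-check

/**
 * @typedef {Object} User
 * @property {string} name
 * @property {number} age
 */

/**
 * @param {User} user
 * @returns {string}
 */
function greet(user) {
  return `Hello, ${user.name} (${user.age})`
}

console.log(greet({ name: 'Ada', age: 36 }))
// greet({ name: 'Ada' })  // tsc would flag this: property 'age' is missing
```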
With this version, and with each subsequent version, TypeScript not only added support for JSDoc but also incorporated many of the core tags and constructs present in the Closure Compiler. This made migration to TypeScript more straightforward. While Closure Compiler is still being updated, used by some teams, and remains the most effective tool for code compression in JavaScript (if its rules are followed), due to support for checking .js files and various other updates brought by the TypeScript team, Closure Compiler eventually lost to TypeScript. From the implementation perspective, incorporating an understanding of JSDoc notation into TypeScript is not a fundamental change. Whether it is TypeScript types or JSDoc types, ultimately, they both become part of the AST (Abstract Syntax Tree) of the executed program. This is convenient for us as developers because all our everyday tools, such as ESLint (including all its plugins), Prettier, and others primarily rely on the AST. Therefore, regardless of the file extensions we use, our favorite plugins can continue to work in both .js and .ts files (with some exceptions, of course). TypeScript to JSDoc: Type declaration Developer Experience When adding typing to JavaScript code using JSDoc, it is advisable to use additional tools that enhance the development experience. eslint-plugin-jsdoc is a JSDoc plugin for ESLint. This plugin reports errors in case of invalid JSDoc syntax usage and helps standardize the written JSDoc. An important setting for this plugin is the mode option, which offers one of the following values: typescript, closure (referring to Closure Compiler), or jsdoc. As mentioned earlier, JSDoc can vary, and this option allows you to specify which JSDoc tags and syntax to use. The default value is typescript (though this has not always been the case), which, given TypeScript's dominance over other tools that work with JSDoc, seems like a sensible choice. 
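In a classic .eslintrc-style config, the plugin's mode option lives under `settings.jsdoc`. A sketch (verify the exact shape against the eslint-plugin-jsdoc docs for your version; the flat config format wires it up differently):

```javascript
// .eslintrc.js -- a sketch of enabling eslint-plugin-jsdoc with an explicit mode
const config = {
  plugins: ['jsdoc'],
  extends: ['plugin:jsdoc/recommended'],
  settings: {
    jsdoc: {
      mode: 'typescript', // or 'closure' / 'jsdoc'
    },
  },
}

module.exports = config
```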
TypeScript to JSDoc: Type casting It is also important to choose a tool for analyzing the type annotations written in JSDoc, and in this case, it will be TypeScript. This might sound strange because, in this article, it seems like we are discussing its replacement. However, we are not using TypeScript for its primary purpose – our files still have the .js extension. We will only use TypeScript as a type checking linter. In most projects where TypeScript is used fully, there is typically a build script responsible for compiling .ts files into .js. In the case of using TypeScript as a linting tool, instead of a build command handling compilation, you will have a command for linting your types. JavaScript // package.json { "scripts": { "lint:type": "tsc --noEmit" } } If, in the future, a tool emerges that surpasses TypeScript as a linting tool for project typing, we can always replace it in this script. To make this script work correctly, you need to create a tsconfig.json file in your project or add additional parameters to this script. These parameters include allowJs and checkJs, which allow TypeScript to check code written in .js files. In addition to these parameters, you can enable many others. For example, to make type checking stricter, you can use strict, noUncheckedIndexedAccess, exactOptionalPropertyTypes, noPropertyAccessFromIndexSignature, and more. TypeScript will rigorously check your code even if you are using .js files. The TypeScript team consistently enhances the functionality of TypeScript when working with JSDoc. With almost every release, they introduce both fixes and new features. The same applies to code editors. Syntax highlighting and other DX features provided by TypeScript when working with .ts files also work when dealing with .js files using JSDoc.
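A tsconfig.json along the lines described above might look like this (a sketch; the include glob is made up for illustration, and the extra strictness flags are optional):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true,
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noPropertyAccessFromIndexSignature": true
  },
  "include": ["src/**/*.js"]
}
```

With this in place, `npm run lint:type` checks every .js file under src without emitting any output files.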
Although there are occasional instances where support for certain JSDoc features may come later, many GitHub issues labeled with JSDoc in the TypeScript backlog indicate that the TypeScript team continues to work on improving JSDoc support. TypeScript to JSDoc: Generics Many might mention the nuance that when using TypeScript solely for .js files, you are deprived of the ability to use additional constructs provided by TypeScript; for example, Enums, Namespaces, Class Parameter Properties, Abstract Classes and Members, Experimental (!) Decorators, and others, as their syntax is only available in files with the .ts extension. Again, for some, this may seem like a drawback, but for others, it could be considered a benefit, as most of these constructs have their drawbacks. Primarily, during TypeScript compilation to JavaScript, anything related to types simply disappears. In the case of using the aforementioned constructs, all of them are translated into less-than-optimal JavaScript code. If this does not sound compelling enough for you to refrain from using them, you can explore the downsides of each of these constructs on your own, as there are plenty of articles on the Internet discussing these issues. Overall, the use of these constructs is generally considered an anti-pattern. On most of my projects where I use TypeScript to its full extent (with all my code residing in .ts files), I always use a custom ESLint rule: JavaScript // eslint.config.js /** @type {import('eslint').Linter.FlatConfig} */ const config = { rules: { 'no-restricted-syntax': [ 'error', { selector: 'TSEnumDeclaration,TSModuleDeclaration,TSParameterProperty,ClassDeclaration[abstract=true],Decorator', message: 'TypeScript shit is forbidden.', }, ], }, } This rule prohibits the use of TypeScript constructs that raise concerns. When considering what remains of TypeScript when applying this ESLint rule, essentially, only the typing aspect remains. 
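For instance, the TS Enum flagged by the rule above has a plain-JavaScript counterpart that keeps type safety through JSDoc: a frozen object plus a typedef deriving the union of its values. A sketch:

```javascript
// @ts-check

// Vanilla JavaScript "enum": a frozen object instead of a TS Enum.
const Status = Object.freeze({
  Active: 'active',
  Inactive: 'inactive',
})

// Union of the object's values: 'active' | 'inactive'
/** @typedef {typeof Status[keyof typeof Status]} StatusValue */

/**
 * @param {StatusValue} status
 * @returns {boolean}
 */
const isActive = (status) => status === Status.Active

console.log(isActive(Status.Active)) // true
```

Unlike a TS Enum, nothing here disappears or changes shape at compile time: it is ordinary JavaScript that tsc can still check exhaustively.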
In this context, when using this rule, leveraging JSDoc tags and syntax provided by TypeScript for adding typing to .js files is almost indistinguishable from using TypeScript with .ts files. TypeScript to JSDoc: Class and its members As mentioned earlier, most tools rely on AST for their operations, including TypeScript. TypeScript does not care whether you define types using TypeScript's keywords and syntax or JSDoc tags supported by TypeScript. This principle also applies to ESLint and its plugins, including the typescript-eslint plugin. This means that we can use this plugin and its powerful rules to check typing even if the entire code is written in .js files (provided you enabled the appropriate parser). Unfortunately, a significant drawback when using these tools with .js files is that some parts of these tools, such as specific rules in typescript-eslint, rely on the use of specific TypeScript keywords. Examples of such rules include explicit-function-return-type, explicit-member-accessibility, no-unsafe-return, and others that are tied explicitly to TypeScript keywords. Fortunately, there are not many such rules. Despite the fact that these rules could be rewritten to use AST, the development teams behind these rules are currently reluctant to do so due to the increased complexity of support when transitioning from using keywords to AST. Conclusion JSDoc, when used alongside TypeScript as a linting tool, provides developers with a powerful means of typing .js files. Its functionality does not lag significantly behind TypeScript when used to its full potential, keeping all the code in .ts files. Utilizing JSDoc allows developers to introduce typing into a pure JavaScript project right now, without delaying it as a distant dream of a full migration to TypeScript (which most likely will never happen). Many mistakenly spend too much time critiquing the JSDoc syntax, deeming it ugly, especially when compared to TypeScript. 
It is hard to argue otherwise: TypeScript's syntax does indeed look much more concise. However, what is truly a mistake is engaging in empty discussions about syntax instead of taking any action. In the end, you will probably want to achieve a similar result, as shown in the screenshot below. Performing such a migration is significantly easier and more feasible when transitioning from code that already has typing written in JSDoc. JSDoc to TypeScript: Possibly a long-awaited migration; React Component By the way, many who label the JSDoc syntax as ugly while using TypeScript as their sole primary tool nonchalantly return, after such remarks, to their .ts files, fully embracing TS Enums, TS Parameter Properties, TS Experimental (!) Decorators, and other TS constructs that might raise questions. Do they truly believe they are on the right side? Most of the screenshots were taken from the migration of .ts files to .js while preserving type functionality in my library form-payload (here is the PR). Why did I decide to make this migration? Because I wanted to. Although this is far from my only experience with such migrations. Interestingly, the direction of migration often changes (migrations from .js to .ts undoubtedly occur more frequently). Despite my affection for TypeScript and its concise syntax, after several dozen files written or rewritten using JSDoc, I stopped feeling any particular aversion to the JSDoc syntax; it is just syntax. Summing Up JSDoc provides developers with real opportunities for gradually improving a codebase without requiring a complete transition to TypeScript from the start of a migration. It is essential to remember that the key is not to pray to the TypeScript god but to start taking action. The ultimate transition to using TypeScript fully is possible, but you might also realize that JSDoc is more than sufficient for successful development, as it has its own advantages.
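As a stand-in for the React component referenced above, here is a props object typed with a JSDoc @typedef. This is a framework-agnostic sketch of my own (the names are illustrative, and the "component" is a plain function so it stays self-contained):

```javascript
/**
 * @typedef {Object} GreetingProps
 * @property {string} name          who to greet
 * @property {number} [excitement]  number of exclamation marks; defaults to 1
 */

/**
 * A plain-function stand-in for a typed component.
 * @param {GreetingProps} props
 * @returns {string}
 */
function Greeting({ name, excitement = 1 }) {
  return `Hello, ${name}${'!'.repeat(excitement)}`
}
```

The @typedef plays the role a props interface would play in a .tsx file; migrating this to TypeScript later is a mechanical rewrite of comments into annotations.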
For example, here is what a "JSDoc-compiler" might look like: JavaScript // bundler.js import * as esbuild from 'esbuild' await esbuild.build({ entryPoints: [jsMainEntryPoint], minify: true, }) Give it a try! Do not stand still, continually develop your project, and I am sure you will find many other benefits!
Welcome back to this series where we’re building web applications with AI tooling. Intro and Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security and Reliability Deploying In the previous post, we got AI-generated jokes into our Qwik application from the OpenAI API. It worked, but the user experience suffered because we had to wait until the API completed the entire response before updating the client. A better experience, as you’ll know if you’ve used any AI chat tools, is to respond as soon as each bit of text is generated. It becomes a sort of teletype effect. That’s what we’re going to build today using HTTP streams. Prerequisites Before we get into streams, we need to explore a Qwik quirk related to HTTP requests. If we examine the current POST request being sent by the form, we can see that the returned payload isn’t just the plain text we returned from our action handler. Instead, it’s this sort of serialized data. This is the result of how the Qwik Optimizer lazy loads assets, and is necessary to properly handle the data as it comes back. Unfortunately, this prevents standard streaming responses. So while routeAction$ and the Form component are super handy, we’ll have to do something else. To their credit, the Qwik team does provide a well-documented approach for streaming responses. However, it involves their server$ function and async generator functions. This would probably be the right approach if we’re talking strictly about Qwik, but this series is for everyone. I’ll avoid this implementation, as it’s too specific to Qwik, and focus on broadly applicable concepts instead. Refactor Server Logic It sucks that we can’t use route actions because they’re great. So what can we use? Qwik City offers a few options. The best option I found is middleware. It provides enough access to primitive tools that we can accomplish what we need, and the concepts will apply to other contexts besides Qwik.
Middleware is essentially a set of functions that we can inject at various points within the request lifecycle of our route handler. We can define them by exporting named constants for the hooks we want to target (onRequest, onGet, onPost, onPut, onDelete). So instead of relying on a route action, we can use a middleware that hooks into any POST request by exporting an onPost middleware. In order to support streaming, we’ll want to return a standard Response object. We can do so by creating a Response object and passing it to the requestEvent.send() method. Here’s a basic (non-streaming) example: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = (requestEvent) => { requestEvent.send(new Response('Hello Squirrel!')) } Before we tackle streaming, let’s get the same functionality from the old route action implemented with middleware. We can copy most of the code into the onPost middleware, but we won’t have access to formData. Fortunately, we can recreate that data from the requestEvent.parseBody() method. We’ll also want to use requestEvent.send() to respond with the OpenAI data instead of a return statement. /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = async (requestEvent) => { const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY') const formData = await requestEvent.parseBody() const prompt = formData.prompt const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }] } const response = await fetch('https://api.openai.com/v1/chat/completions', { // ... fetch options }) const data = await response.json() const responseBody = data.choices[0].message.content requestEvent.send(new Response(responseBody)) } Refactor Client Logic Replacing the route actions has the unfortunate side effect of meaning we also can’t use the <Form> component anymore. 
We’ll have to use a regular HTML <form> element and recreate all the benefits we had before, including sending HTTP requests with JavaScript, tracking the loading state, and accessing the results. Let’s refactor our client side to support those features again. We can break these requirements down into two things: a JavaScript solution for submitting forms, and reactive state for managing loading states and results. I’ve covered submitting HTML forms with JavaScript in depth several times in the past: Make Beautifully Resilient Apps With Progressive Enhancement File Uploads for the Web (2): Upload Files With JavaScript Building Super Powered HTML Forms with JavaScript So today I’ll just share the snippet, which I put inside a utils.js file in the root of my project. This jsFormSubmit function accepts an HTMLFormElement, then constructs a fetch request based on the form attributes and returns the resulting Promise: /** * @param {HTMLFormElement} form */ export function jsFormSubmit(form) { const url = new URL(form.action) const formData = new FormData(form) const searchParameters = new URLSearchParams(formData) /** @type {Parameters<typeof fetch>[1]} */ const fetchOptions = { method: form.method } if (form.method.toLowerCase() === 'post') { fetchOptions.body = form.enctype === 'multipart/form-data' ? formData : searchParameters } else { url.search = searchParameters } return fetch(url, fetchOptions) } This generic function can be used to submit any HTML form, so it’s handy to use in a submit event handler. Sweet! As for the reactive data, Qwik provides two options: useStore and useSignal. I prefer useStore, which allows us to create an object whose properties are reactive, meaning changes to the object’s properties will automatically be reflected wherever they are referenced in the UI. We can use useStore to create a “state” object in our component to track the loading state of the HTTP request as well as the text response.
import { $, component$, useStore } from "@builder.io/qwik"; // other setup logic export default component$(() => { const state = useStore({ isLoading: false, text: '', }) // other component logic }) Next, we can update the template. Since we can no longer use the action object we had before, we can replace references from action.isRunning and action.value to state.isLoading and state.text, respectively (don’t ask me why I changed the property names). I’ll also add a “submit” event handler to the form called handleSubmit, which we’ll look at shortly. <main> <form method="post" preventdefault:submit onSubmit$={handleSubmit} > <div> <label for="prompt">Prompt</label> <textarea name="prompt" id="prompt"> Tell me a joke </textarea> </div> <button type="submit" aria-disabled={state.isLoading}> {state.isLoading ? 'One sec...' : 'Tell me'} </button> </form> {state.text && ( <article> <p>{state.text}</p> </article> )} </main> Note that the <form> does not explicitly provide an action attribute. By default, an HTML form will submit data to the current URL, so we only need to set the method to POST and submit this form to trigger the onPost middleware we defined earlier. Now, the last step to get this refactor working is defining handleSubmit. Just like we did in the previous post, we need to wrap an event handler inside Qwik’s $ function. Inside the event handler, we’ll want to clear out any previous data from state.text, set state.isLoading to true, then pass the form’s DOM node to our fancy jsFormSubmit function. This should submit the HTTP request for us. Once it comes back, we can update state.text with the response body, and set state.isLoading back to false. const handleSubmit = $(async (event) => { state.text = '' state.isLoading = true /** @type {HTMLFormElement} */ const form = event.target const response = await jsFormSubmit(form) state.text = await response.text() state.isLoading = false }) OK!
We should now have a client-side form that uses JavaScript to submit an HTTP request to the server while tracking the loading and response states, and updating the UI accordingly. That was a lot of work to get the same solution we had before but with fewer features. But the key benefit is we now have direct access to the platform primitives we need to support streaming. Enable Streaming on the Server Before we start streaming responses from OpenAI, I think it’s helpful to start with a very basic example to get a better grasp of streams. Streams allow us to send small chunks of data over time. So as an example, let’s print out some iconic David Bowie lyrics in tempo with the song, “Space Oddity." When we construct our Response object, instead of passing plain text, we’ll want to pass a stream. We’ll create the stream shortly, but here’s the idea: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = (requestEvent) => { requestEvent.send(new Response(stream)) } We’ll create a very rudimentary ReadableStream using the ReadableStream constructor, passing it an optional parameter. This optional parameter can be an object with a start method that’s called when the stream is constructed. The start method is responsible for the stream’s logic and has access to the stream controller, which is used to send data and close the stream. const stream = new ReadableStream({ start(controller) { // Stream logic goes here } }) OK, let’s plan out that logic. We’ll have an array of song lyrics and a function to "sing" them (pass them to the stream). The sing function will take the first item in the array and pass that to the stream using the controller.enqueue() method. If it’s the last lyric in the list, we can close the stream with controller.close(). Otherwise, the sing method can call itself again after a short pause.
const stream = new ReadableStream({ start(controller) { const lyrics = ['Ground', ' control', ' to major', ' Tom.'] function sing() { const lyric = lyrics.shift() controller.enqueue(lyric) if (lyrics.length < 1) { controller.close() } else { setTimeout(sing, 1000) } } sing() } }) So each second, for four seconds, this stream will send out the lyrics “Ground control to major Tom.” Slick! Because this stream will be used in the body of the Response, the connection will remain open for four seconds until the response completes. But the front end will have access to each chunk of data as it arrives, rather than waiting the full four seconds. This doesn’t speed up the total response time (in some cases, streams can increase response times), but it does allow for a faster perceived response, and that makes a better user experience. Here’s what my code looks like: /** @type {import('@builder.io/qwik-city').RequestHandler} */ export const onPost = async (requestEvent) => { const stream = new ReadableStream({ start(controller) { const lyrics = ['Ground', ' control', ' to major', ' Tom.'] function sing() { const lyric = lyrics.shift() controller.enqueue(lyric) if (lyrics.length < 1) { controller.close() } else { setTimeout(sing, 1000) } } sing() } }) requestEvent.send(new Response(stream)) } Unfortunately, as it stands right now, the client will still be waiting four seconds before seeing the entire response, and that’s because we weren’t expecting a streamed response. Let’s fix that. Enable Streaming on the Client Even when dealing with streams, the default browser behavior when receiving a response is to wait for it to complete. In order to get the behavior we want, we’ll need to use client-side JavaScript to make the request and process the streaming body of the response. We’ve already tackled that first part inside our handleSubmit function. Let’s start processing that response body.
We can grab a reader for the response body’s ReadableStream by calling its getReader() method. This reader has a read() method that we can use to access the next chunk of data, as well as whether the response is done streaming or not. The only "gotcha" is that the data in each chunk doesn’t come in as text: it comes in as a Uint8Array, which is “an array of 8-bit unsigned integers.” It’s basically the representation of the binary data, and you don’t really need to understand it any deeper than that unless you want to sound very smart at a party (or boring). The important thing to understand is that on their own, these data chunks aren’t very useful. To get something we can use, we’ll need to decode each chunk of data using a TextDecoder. Ok, that’s a lot of theory. Let’s break down the logic and then look at some code. When we get the response back, we need to: Grab the reader from the response body using response.body.getReader(). Set up a decoder using TextDecoder and a variable to track the streaming status. Process each chunk until the stream is complete, with a while loop that does this: Grab the next chunk’s data and stream status. Decode the data and use it to update our app’s state.text. Update the streaming status variable, terminating the loop when complete. Update the loading state of the app by setting state.isLoading to false.
The new handleSubmit function should look something like this: const handleSubmit = $(async (event) => { state.text = '' state.isLoading = true /** @type {HTMLFormElement} */ const form = event.target const response = await jsFormSubmit(form) // Parse streaming body const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) state.text += chunkValue isStillStreaming = !done } state.isLoading = false }) Now, when I submit the form, I see something like: “Groundcontrolto majorTom.” Hell yeah!!! OK, most of the work is done. Now we just need to replace our demo stream with the OpenAI response. Stream OpenAI Response Looking back at our original implementation, the first thing we need to do is modify the request to OpenAI to let them know that we would like a streaming response. We can do that by setting the stream property in the fetch payload to true. const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }], stream: true } const response = await fetch('https://api.openai.com/v1/chat/completions', { method: 'post', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${OPENAI_API_KEY}`, }, body: JSON.stringify(body) }) UPDATE 11/15/2023: I used fetch and custom streams because at the time of writing, the openai module on NPM did not properly support streaming responses. That issue has been fixed, and I think a better solution would be to use that module and pipe their data through a TransformStream to send to the client. That version is not reflected here. Next, we could pipe the response from OpenAI directly to the client, but we might not want to do that.
The data they send doesn’t really align with what we want to send to the client because it looks like this (two chunks, one with data, and one representing the end of the stream): data: {"id":"chatcmpl-4bJZRnslkje3289REHFEH9ej2","object":"chat.completion.chunk","created":1690319476,"model":"gpt-3.5-turbo-0613","choices":[{"index":0,"delta":{"content":"Because"},"finish_reason":"stop"}]} data: [DONE] Instead, what we’ll do is create our own stream, similar to the David Bowie lyrics, that will do some setup, enqueue chunks of data into the stream, and close the stream. Let’s start with an outline: const stream = new ReadableStream({ async start(controller) { // Any setup before streaming // Send chunks of data // Close stream } }) Since we’re dealing with a streaming fetch response from OpenAI, a lot of the work we need to do here can actually be copied from the client-side stream handling. This part should look familiar: const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) // Here's where things will be different isStillStreaming = !done } This snippet was taken almost directly from the frontend stream processing example. The only difference is that we need to treat the data coming from OpenAI slightly differently. As we saw, the chunks of data they send will look something like "data: [JSON data or done]". Another gotcha is that every once in a while, they’ll actually slip in TWO of these data strings in a single streaming chunk. So here’s what I came up with for processing the data. Create a Regular Expression to grab the rest of the string after "data:". For the unlikely event there is more than one data string, use a while loop to process every match in the string. If the current match is the closing condition (“[DONE]“), close the stream.
Otherwise, parse the data as JSON and enqueue the first piece of text from the list of options (json.choices[0].delta.content). Fall back to an empty string if none is present. Lastly, in order to move to the next match, if there is one, we can use RegExp.exec(). The logic is quite abstract without looking at the code, so here’s what the whole stream looks like now: const stream = new ReadableStream({ async start(controller) { // Do work before streaming const reader = response.body.getReader() const decoder = new TextDecoder() let isStillStreaming = true while(isStillStreaming) { const {value, done} = await reader.read() const chunkValue = decoder.decode(value) /** * Captures any string after the text `data: ` * @see https://regex101.com/r/R4QgmZ/1 */ const regex = /data:\s*(.*)/g let match = regex.exec(chunkValue) while (match !== null) { const payload = match[1] // Close stream if (payload === '[DONE]') { controller.close() break } else { try { const json = JSON.parse(payload) const text = json.choices[0].delta.content || '' // Send chunk of data controller.enqueue(text) match = regex.exec(chunkValue) } catch (error) { const nextChunk = await reader.read() const nextChunkValue = decoder.decode(nextChunk.value) match = regex.exec(chunkValue + nextChunkValue) } } } isStillStreaming = !done } } }) UPDATE 11/15/2023: I discovered that OpenAI API sometimes returns the JSON payload across two streams. So the solution is to use a try/catch block around the JSON.parse and in the case that it fails, reassign the match variable to the current chunk value plus the next chunk value. The code above has the updated snippet. Review That should be everything we need to get streaming working. Hopefully, it all makes sense and you got it working on your end. I think it’s a good idea to review the flow to make sure we’ve got it: The user submits the form, which gets intercepted and sent with JavaScript. This is necessary to process the stream when it returns. 
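As the update above notes, another option is to pipe OpenAI's body through a TransformStream rather than hand-rolling a ReadableStream. Here is a hedged sketch of that approach (the parsing mirrors the regex logic above; the names extractText and openAiToText are my own, and the simple version here ignores the split-payload edge case):

```javascript
const decoder = new TextDecoder()

// Pulls just the text content out of a raw "data: {...}" chunk string.
function extractText(chunkValue) {
  const regex = /data:\s*(.*)/g
  let text = ''
  let match
  while ((match = regex.exec(chunkValue)) !== null) {
    const payload = match[1]
    if (payload === '[DONE]') break // end-of-stream marker; nothing to enqueue
    const json = JSON.parse(payload)
    text += json.choices[0].delta.content || ''
  }
  return text
}

// A TransformStream that reshapes OpenAI chunks into plain text chunks.
const openAiToText = new TransformStream({
  transform(chunk, controller) {
    // chunk arrives as a Uint8Array, same as in the reader-based version
    controller.enqueue(extractText(decoder.decode(chunk)))
  },
})

// Usage sketch: new Response(response.body.pipeThrough(openAiToText))
```

The behavior is equivalent to the manual loop, but the plumbing (reading, backpressure, closing) is handled by the platform.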
The request is received by the onPost middleware, which forwards the data to the OpenAI API along with the setting to return the response as a stream. The OpenAI response will be sent back as a stream of chunks, some of which contain JSON and the last one being “[DONE]“. Instead of passing that stream directly to our response, we create a new stream to use in the response. Inside this stream, we process each chunk of data from the OpenAI response and convert it to something more useful before enqueuing it for our response stream. When the OpenAI stream closes, we also close our stream. The JavaScript handler on the client side will also process each chunk of data as it comes in and update the UI accordingly. Conclusion The app is working. It’s pretty cool. We covered a lot of interesting things today. Streams are very powerful, but also challenging, and, especially when working within Qwik, there are a couple of little gotchas. However, because we focused on low-level fundamentals, these concepts should apply across any framework. As long as you have access to the platform primitives like streams, requests, and response objects, this should work. That’s the beauty of fundamentals. I think we have a pretty decent application going now. The only problem is that right now we’re using a generic text input and asking users to fill in the entire prompt themselves. In fact, they can put in whatever they want. We’ll want to fix that in a future post, but the next post is going to step away from code and focus on understanding how the AI tools actually work. I hope you’ve been enjoying this series and come back for the rest of it. Thank you so much for reading.
So I've been working on a project for a while to create a real-time, high-performance JavaScript Chart Library. This project uses quite an ambitious & novel tech stack, including a large legacy codebase in C/C++ which is compiled to WebAssembly using Emscripten, targeting WebGL, and a TypeScript API wrapper allowing you to load the charts in JS without having to worry about the underlying Wasm. First Up, Why Use Wasm at All? WebAssembly is an exciting technology and offers performance benefits over JavaScript in many cases. Also, in this case, a legacy C++ codebase already handled much of the rendering for charts & graphs in OpenGL and needed only a little work to be able to target WebGL. It's fairly easy to compile existing C++ code into WebAssembly using Emscripten, and all that remains is writing bindings to generate typings and then your JavaScript API around the Wasm library to use it. During the development of the library we learned some interesting things about the WebAssembly memory model and how to avoid and debug memory leaks, which I'll share below. JavaScript vs. WebAssembly Memory Model WebAssembly has a completely different memory model from JavaScript. While JavaScript has a garbage collector, which automatically cleans up the memory of variables that are no longer required, WebAssembly simply does not. An object or buffer declared in Wasm memory must be deleted by the caller; if not, a memory leak will occur. How Memory Leaks Are Caused in JavaScript Memory leaks can occur in both JavaScript and WebAssembly, and care and attention must be taken by the developer to ensure that memory is correctly cleaned up when using WebAssembly. Despite JavaScript being a garbage-collected managed programming language, it's still extremely easy to create a memory leak in vanilla JavaScript.
Here are a few ways it is possible to inadvertently leak memory in a JavaScript app: Arrow functions and closures can capture variables and keep them alive, so they cannot be deleted by the JavaScript garbage collector. Callbacks or event listeners can capture a variable and keep it alive. Global variables or static variables stay alive for the lifetime of the application. Simply forgetting to use let or const can convert a variable to a global variable. Even detached DOM nodes can keep objects alive in JavaScript. Simply removing a node from the DOM but keeping a variable referencing it can prevent the node and its children from being collected. How Memory Leaks Are Caused in WebAssembly Wasm has a separate heap from the JavaScript virtual machine. This memory is allocated in the browser and reserved from the host OS. When you allocate memory in Wasm, the Wasm heap is grown, and a range of addresses is reserved. When you delete memory in Wasm, the heap does not shrink, and memory is not returned to the host OS. Instead, the memory is simply marked as deleted or available. This means it can be re-used by future allocations. To cause a memory leak in WebAssembly you simply need to allocate memory and forget to delete it. Since there is no automatic garbage collection, finalization, or marking of memory as no longer needed, it must come from the user. All WebAssembly types exported by the Emscripten compiler have a .delete() function on objects that use Wasm memory. This needs to be called when the object is no longer required.
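For raw buffers allocated directly (rather than class instances exported by Emscripten), the same rule applies to Module._malloc and Module._free. Here is a hedged sketch, assuming a build that exports those functions; the helper name withWasmBuffer is my own invention:

```javascript
// Hedged sketch: manual allocation on the Emscripten heap.
// Assumes `Module` was built with _malloc/_free exported,
// e.g. -s EXPORTED_FUNCTIONS="['_malloc','_free']".
function withWasmBuffer(Module, byteLength, fn) {
  const ptr = Module._malloc(byteLength) // grows the Wasm heap if needed
  try {
    return fn(ptr) // caller works with the raw pointer
  } finally {
    // Without this, the bytes stay reserved in the Wasm heap: a leak.
    Module._free(ptr)
  }
}
```

Wrapping allocations in a try/finally like this guarantees the free runs even if the work in between throws, which is exactly the discipline a garbage collector would otherwise give you for free.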
Here's a quick example: Example: Leaking Memory in Wasm Assuming you have a type declared and bound in C++ like this: C++ // person.cpp #include <string> #include <emscripten/bind.h> class Person { public: // C++ constructor Person(std::string name, int age) : name(name), age(age) {} // C++ destructor ~Person() {} std::string getName() { return name; } int getAge() { return age; } private: std::string name; int age; }; EMSCRIPTEN_BINDINGS(person_module) { emscripten::class_<Person>("Person") .constructor<std::string, int>() .function("getName", &Person::getName) .function("getAge", &Person::getAge); } And compile and export the type using Emscripten (with embind) like this: Shell emcc --bind person.cpp -o person.js -s MODULARIZE=1 You can now instantiate, use, and delete the type in JavaScript like this: JavaScript const createModule = require('./person.js'); // The generated module factory (MODULARIZE=1) createModule().then((Module) => { // Instantiate a Person object const person = new Module.Person('John Doe', 30); console.log('Person object created:', person); // Access and print properties console.log('Name:', person.getName()); console.log('Age:', person.getAge()); // Delete the Person object (calls the C++ destructor) person.delete(); }); Forgetting to call .delete(), however, causes a Wasm memory leak. The memory in the browser will grow and not shrink. Detecting Memory Leaks in WebAssembly Applications Because a memory leak is catastrophic to an application, we had to ensure that our code did not leak memory, but also that user code (those consuming and using our [JavaScript Chart Library](https://www.scichart.com/javascript-chart-features) in their applications) did not leak memory. To solve this we developed our own in-house memory debugging tools. This is implemented as an object registry, a Map<string, TObjectEntryInfo> of all undeleted and uncollected objects, where TObjectEntryInfo is a type which stores a WeakRef to the object. Using a JavaScript proxy technique we were able to intercept calls to new/delete on all WebAssembly types.
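A registry like this can be sketched with a Proxy construct trap. The following is an illustrative simplification of my own, not SciChart's actual implementation (trackType, objectRegistry, and the id scheme are all assumed names):

```javascript
// Hedged sketch of a proxy-based object registry for Wasm-backed types.
const objectRegistry = new Map() // id -> WeakRef to the live object
let nextId = 0

// Wraps a Wasm-exported class so construction and delete() are tracked.
function trackType(WasmClass) {
  return new Proxy(WasmClass, {
    construct(target, args) {
      const instance = Reflect.construct(target, args)
      const id = `${target.name}-${nextId++}`
      objectRegistry.set(id, new WeakRef(instance))
      // Intercept delete() so the registry entry is removed with the object.
      const originalDelete = instance.delete?.bind(instance)
      instance.delete = () => {
        objectRegistry.delete(id)
        if (originalDelete) originalDelete()
      }
      return instance
    },
  })
}
```

Anything still in objectRegistry at snapshot time is either legitimately alive or a candidate leak, which is precisely the distinction the tooling output below helps you make.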
Each time an object was instantiated, we added it to the objectRegistry, and each time it was deleted, we removed it from the objectRegistry. Now you can run your application, enable the memory debugging tools, and output snapshots of your application state. Here's an example of the tool's output. First, enable the MemoryUsageHelper (memory debugging tools): JavaScript import { MemoryUsageHelper } from "scichart"; MemoryUsageHelper.isMemoryUsageDebugEnabled = true; This automatically tracks all the types in our library, but you can track any arbitrary object in your application by calling register and unregister like this: JavaScript // Register an arbitrary object MemoryUsageHelper.register(yourObject, "identifier"); // Unregister an arbitrary object MemoryUsageHelper.unregister("identifier"); Later, at a specific point, output a snapshot by calling this function: JavaScript MemoryUsageHelper.objectRegistry.log(); This logs to the console all the objects which have not yet been deleted or collected. How To Use This Output Objects in the undeletedObjectsMap may either be still alive, or perhaps you've forgotten to delete them. In that case, the resolution is simple: call .delete() on the object when you are done with it. Objects in uncollectedObjectsMap have not yet been garbage collected. This could be a traditional JS memory leak (which also affects Wasm memory), so check for global variables, closures, callbacks, and other causes of traditional JS memory leaks. Objects in deletedNotCollected and collectedNotDeleted identify possible leaks where an object was collected by the JavaScript garbage collector but not deleted (and vice versa). The MemoryUsageHelper Wasm memory leak debugging tools are part of SciChart.js, available on npm with a free community license. It can be used in WebAssembly or JavaScript applications to track memory usage.