Developer Experience
With tech stacks becoming increasingly diverse and AI and automation continuing to take over everyday tasks and manual workflows, the tech industry at large is experiencing a heightened demand to support engineering teams. As a result, the developer experience is changing faster than organizations can deliberately manage.

We can no longer rely on DevOps practices or tooling alone — there is even greater power recognized in improving workflows, investing in infrastructure, and advocating for developers' needs. This nuanced approach brings developer experience to the forefront, where devs can begin to regain control over their software systems, teams, and processes.

We are happy to introduce DZone's first-ever Developer Experience Trend Report, which assesses where the developer experience stands today, including team productivity, process satisfaction, infrastructure, and platform engineering. Taking all perspectives, technologies, and methodologies into account, we share our research and industry experts' perspectives on what it means to effectively advocate for developers while simultaneously balancing quality and efficiency. Come along with us as we explore this exciting chapter in developer culture.
Hey, DZone Community! We have an exciting year ahead of research for our beloved Trend Reports. And once again, we are asking for your insights and expertise (anonymously if you choose) — readers just like you drive the content we cover in our Trend Reports. Check out the details for our research survey below.

Comic by Daniel Stori

Generative AI Research

Generative AI is revolutionizing industries, and software development is no exception. At DZone, we're diving deep into how GenAI models, algorithms, and implementation strategies are reshaping the way we write code and build software. Take our short research survey (~10 minutes) to contribute to our latest findings. We're exploring key topics, including:

Embracing generative AI (or not)
Multimodal AI
The influence of LLMs
Intelligent search
Emerging tech

And don't forget to enter the raffle for a chance to win an e-gift card of your choice! Join the GenAI Research

Over the coming month, we will compile and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" of our Trend Reports. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help! —The DZone Content and Community team
Introduction to RAG and Quarkus Retrieval-augmented generation (RAG) is a technique that enhances AI-generated responses by retrieving relevant information from a knowledge source. In this tutorial, we’ll build a simple RAG-powered application using Java and Quarkus (a Kubernetes-native Java framework). Perfect for Java beginners! Why Quarkus? Quarkus provides multiple LangChain4j extensions to simplify AI application development, especially RAG implementation by providing an Easy RAG module for building end-to-end RAG pipelines. Easy RAG acts as a bridge, connecting the retrieval components (like your document source) with the LLM interaction within the LangChain4j framework. Instead of manually orchestrating the retrieval, context injection, and LLM call, easy RAG handles these steps behind the scenes, reducing the amount of code you need to write. This abstraction allows you to focus on defining your data sources and crafting effective prompts, while easy RAG takes care of the more technical details of the RAG workflow. Within a Quarkus application, this means you can quickly set up a RAG endpoint by simply configuring your document source and leveraging easy RAG to retrieve and query. This tight integration with LangChain4j also means you still have access to the more advanced features of LangChain4j if you need to customize or extend your RAG pipeline beyond what easy RAG provides out of the box. Essentially, easy RAG significantly lowers the barrier to entry for building RAG applications in a Quarkus environment, allowing Java developers to rapidly prototype and deploy solutions without getting bogged down in the lower-level implementation details. It provides a convenient and efficient way to leverage the power of RAG within the already productive Quarkus and LangChain4j ecosystem. Step 1: Set Up Your Quarkus Project Create a new Quarkus project using the Maven command: Shell mvn io.quarkus:quarkus-maven-plugin:3.18.4:create \ -DprojectGroupId=com.devzone \ -DprojectArtifactId=quarkus-rag-demo \ -Dextensions='langchain4j-openai, langchain4j-easy-rag, websockets-next' This generates a project with a simple AI bot with easy RAG integration. Find the solution project here. The AI service refers to Open AI by default. You can replace it with local Ollama using the quarkus-langchain4j-ollama extension rather than quarkus-langchain4j-openai. Step 2: Explore the Generated AI Service Open the Bot.java file in the src/main/java/com/devzone folder. The code should look like this: Java @RegisterAiService // no need to declare a retrieval augmentor here, it is automatically generated and discovered public interface Bot { @SystemMessage(""" You are an AI named Bob answering questions about financial products. Your response must be polite, use the same language as the question, and be relevant to the question. When you don't know, respond that you don't know the answer and the bank will contact the customer directly. """) String chat(@UserMessage String question); } @RegisterAiService registers the AI service as an interface.@SystemMessage defines the initial instruction and scope that will be sent to the LLM as the first message.@UserMessage defines prompts (e.g., user input) and usually combines requests and expected responses’ format. You can change the definitions regarding your LLMs and prompt engineering practices. 
Step 3: Learn How to Integrate Easy RAG Into the AI Service When the quarkus-langchain4j-easy-rag extension is added to the Quarkus project, the only steps required to ingest documents into an embedding store are to include a dependency for an embedding model and specify a single configuration property, quarkus.langchain4j.easy-rag.path, which points to a local directory containing your documents. During application startup, Quarkus automatically scans all files within the specified directory and ingests them into an in-memory embedding store, eliminating the need for manual setup or complex configuration. Open the application.properties file in the src/main/resources folder. You should find the quarkus.langchain4j.easy-rag.path=easy-rag-catalog property. Navigate to the easy-rag-catalog folder in the project root directory. You should find four documents in different file formats, such as txt, odt, and pdf files. Shell . |____retirement-money-market.txt |____elite-money-market-account.odt |____smart-checking-account.pdf |____standard-saving-account.txt This approach significantly reduces the overhead typically associated with implementing RAG pipelines, allowing developers to focus on building their application logic rather than managing the intricacies of document ingestion and embedding storage. By leveraging the quarkus-langchain4j-easy-rag extension, developers can quickly enable their applications to retrieve and utilize relevant information from documents, enhancing the capabilities of AI-driven features such as chatbots, question-answering systems, or intelligent search functionalities. The extension's seamless integration with Quarkus ensures a smooth development experience, aligning with Quarkus's philosophy of making advanced technologies accessible and easy to use in cloud-native environments. Step 4: Test Your Application Using Quarkus Dev Mode Before testing the AI application, you need to set your OpenAI API key in the application.properties file: quarkus.langchain4j.openai.api-key=YOUR_OPENAI_API_KEY Start the Quarkus dev mode to test the AI application using the following Maven command: ./mvnw quarkus:dev The output should look like this: Shell Listening for transport dt_socket at address: 55962 __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ INFO [io.qua.lan.eas.run.EasyRagRecorder] (Quarkus Main Thread) Reading embeddings from /Users/danieloh/Downloads/quarkus-rag-demo/easy-rag-embeddings.json INFO [io.quarkus] (Quarkus Main Thread) quarkus-rag-demo 1.0.0-SNAPSHOT on JVM (powered by Quarkus 3.18.4) started in 2.338s. Listening on: http://localhost:8080 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [awt, cdi, langchain4j, langchain4j-easy-rag, langchain4j-openai, langchain4j-websockets-next, poi, qute, rest-client, rest-client-jackson, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx, websockets-next] -- Tests paused Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options> To access the Quarkus Dev UI, press "D" on the terminal where the Quarkus dev mode is running or access http://localhost:8080/q/dev-ui/ directly in a web browser. Select "Chat" to access the experimental prompt page.
This lets developers verify a new AI service quickly without implementing REST APIs or front-end applications. Input (prompt) the following text to verify the RAG functionality: Tell me about the benefits of a "Standard savings account." Send the prompt to OpenAI. The AI model is GPT-4o mini by default. The prompt is augmented with content retrieved from the ingested documents (e.g., standard-saving-account.txt) before the user message is sent to the LLM. A few seconds after your request is processed, the response comes back with an answer drawn from that document. Enhancements for Real-World Use
Use a vector database. Replace the in-memory embedding store with Qdrant or Pinecone for scalable document retrieval.
Add AI models. Integrate Hugging Face transformers for advanced text generation.
Error handling. Improve robustness with retry logic and input validation.
Conclusion You've built a basic RAG application with Java and Quarkus! This example lays the groundwork for smarter apps that combine retrieval and generation. Experiment with larger datasets or AI models to level up!
Let's say you're building a blog website. On the homepage, you need to display a list of the 10 most recent posts, with pagination allowing users to view older posts. When a user clicks on a post, they should see its content along with metadata, such as the author's name and the creation date. Each post also supports comments, so at the bottom of a post, you'll display the five earliest comments with an option to load more. Additionally, blog posts include tags, which will be shown alongside the post's metadata. You'll also have a dedicated tags page that lists all the tags used across all posts. When a user clicks on a tag, they should see a list of blog posts associated with it. List of posts / blog content / tags list Now, let's focus on the backend, specifically the database design. Choice of Database When visualizing our data, we notice that comments introduce a slight hierarchical structure. Each blog post has its own set of comments (a one-to-many relationship), and these comments exist independently of the blog post's schema. This hierarchical nature is best represented using JSON documents, which also makes a document database the most suitable choice for our problem. A document database is a type of non-relational database where each record (or document) is a self-contained entity with its own flexible schema. This makes document databases ideal for content management systems like blogging applications, where each piece of content can store complex data structures, adapt to changes easily, and evolve over time. Examples of document database providers include Firebase Firestore, MongoDB, and Amazon DocumentDB. Document Database Terminologies
Document: A JSON-like data structure that represents a single record (similar to a row in SQL).
Collection: A grouping of documents (similar to a table in SQL).
Sub-collection: A nested collection within a document, used for one-to-many relationships. Only some document database providers have the sub-collection feature.
Primary key: A unique identifier for a document.
Index: A data structure that speeds up queries by avoiding full collection scans.
Single-field index: An index on a single field, e.g., createdDate for sorting recent documents.
Composite index (compound index): An index on multiple fields for efficient filtering and sorting.
Array indexing: Special indexing to allow searching inside array fields.
Database Schema Firstly, we will create a collection for storing blog documents. The structure of a blog document will be like: JSON { "blogId": "unique-blog-id", "title": "Designing a blog application using document database", "content": "Blog Content...", "author": "Jack Sparrow", "createdDate": "2025-02-09T12:00:00Z", "tags": ["programming", "database"] } The createdDate field is a timestamp, tags is an array field, and the rest are strings. Indexes To efficiently query our data based on our requirements, we will create indexes on:
createdDate: This will allow us to fetch the N most recent blogs.
tags: This will help us fetch blogs containing a given tag.
Comments Each blog will have a list of comments. Since comments are complex data types, it's best to have them in their own collection, referencing their blog. Now, some database providers, like Firestore, allow the creation of sub-collections. That means you can have a comments collection under a blog document. You will reference the comments collection with a path like blogs/<blogId>/comments.
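Before looking at the alternative, here is a brief, hedged sketch of the sub-collection approach, assuming the Firebase Admin SDK for Node.js; the comment fields used here (authorName, text) are illustrative placeholders rather than part of the schema above.

TypeScript
// Illustrative sketch: writing and reading comments stored as a sub-collection
// under blogs/<blogId>/comments (field names are hypothetical).
import { initializeApp } from "firebase-admin/app";
import { getFirestore, Timestamp } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

// Add a comment under a specific blog post
async function addComment(blogId: string, authorName: string, text: string): Promise<string> {
  const ref = await db
    .collection("blogs")
    .doc(blogId)
    .collection("comments")
    .add({ authorName, text, createdDate: Timestamp.now() });
  return ref.id;
}

// Fetch the five earliest comments for a blog post
async function earliestComments(blogId: string, n = 5) {
  const snapshot = await db
    .collection("blogs")
    .doc(blogId)
    .collection("comments")
    .orderBy("createdDate", "asc")
    .limit(n)
    .get();
  return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}

Because the comments live under their parent document, a query for one post's comments never has to scan or filter other posts' data.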
Whereas in others, you will create a separate comments collection altogether, which will have attributes like: JSON { "commentId": "unique-id", // primary key "blogId": "blog-id", "createdDate": "2025-02-09T12:00:00Z" } You will create a composite index on blogId and createdDate so that you can fetch the most recent comments for a given blogId. Queries Here are examples of how we can perform different types of queries. The exact syntax will vary depending on the SDK provided by your chosen database provider. The queries below are written in a syntax similar to Firebase Firestore's TypeScript SDK. 1. Fetch the Recent N Blogs JavaScript db.collection("blogs") .orderBy("createdDate", "desc") .limit(N); 2. Fetch the Next N Blogs JavaScript db.collection("blogs") .orderBy("createdDate", "desc") .startAfter(lastFetchedCreatedDate) // Cursor for next page .limit(N); MongoDB has a different syntax that uses the greater-than ($gt) operator. 3. Fetch a Blog Given Its ID JavaScript db.collection("blogs") .doc(blog-id) 4. Fetch the First N Comments for a Given Blog Using a sub-collection: JavaScript db.collection("blogs") .doc(blog-id) .collection("comments") .orderBy("createdDate", "asc") .startAfter(lastFetchedCreatedDate) .limit(N); Using a separate collection for comments: JavaScript db.comments.find({ blogId: blog-id }) .sort({ createdDate: 1 }) // Oldest first .limit(N); 5. Given a Tag, Fetch All Blogs That Have That Tag JavaScript db.collection("blogs") .where("tags", "array-contains", tag) .orderBy("createdDate", "desc"); 6. Get a List of All Unique Tags Used Across All Posts The best approach to do this efficiently is to have a separate collection for tags. While creating a new blog post, you will check if the tag already exists in the tags collection. If it doesn't, simply add it. JavaScript for (const tag of tags) { const tagDoc = await db.collection("tags").doc(tag).get(); if (!tagDoc.exists) { await db.collection("tags").doc(tag).set({}); // create the tag entry } } Lastly, fetch all tags from the tags collection. JavaScript db.collection("tags") .get() Conclusion And that concludes our article. You are all set to build a resilient, feature-rich blogging or content management application. By leveraging document databases, you ensure flexibility, scalability, and efficient querying, making it easier to handle evolving content structures.
In programming, object mutation means that an object's state or data is changed after creation. In other words, an operation that changes the attributes of an object in JavaScript is known as object mutation. Object mutation alters an object's values directly, which can make state hard to reason about, particularly in applications where multiple operations may try to read from or write to an object simultaneously. This article presents a discussion on object mutation in JavaScript with relevant code examples wherever necessary. Data Types in JavaScript Data types denote the type of data a variable or an object can hold. JavaScript supports two distinct categories of data types: primitive and user-defined or reference types. Primitive Data Types In JavaScript, all primitive data types are immutable by nature, i.e., you cannot alter them after they have been created. Number, Boolean, String, BigInt, undefined, null, and Symbol are the primitive types. User-Defined or Reference Data Types User-defined data types or reference data types are objects created using primitive types or a combination of primitive and user-defined types. Typical examples of user-defined or reference types are objects and arrays. How Variables Are Assigned and Reassigned in JavaScript When you assign a primitive type variable to another primitive type variable, the two variables hold the same value, but it is stored in two different storage locations. For example, assume that you have two variables varA and varB and you assign one variable to another in the following way: JavaScript var varA = 100; var varB = varA; console.log(varB); When you execute the preceding piece of code, the number 100 will be displayed on the console. Now, change the value of one of the two variables (say, varB) as shown here. JavaScript var varA = 100; var varB = varA; varB = 500; console.log(varA); Note how the value of the variable varB has been changed to 500. When you print the value of varA, it will still display 100. This is because the variables varA and varB are stored in two different memory locations. So, if you change one of them, the new or changed value will not be reflected in the other variable. What Is Object Mutation in JavaScript? In JavaScript, a value can belong to either of two categories: primitive or non-primitive. While primitive types are immutable, i.e., you cannot change them after creating them, you can alter non-primitive types, i.e., objects and arrays. Objects always allow their values to be changed. Hence, you can change the state of fields for a mutable type without creating a new instance. Object mutation can create several problems, such as the following:
Mutated objects can often lead to race conditions when concurrent operations share the same object
Mutation can introduce complexity in the source code because behavior becomes harder to predict
Mutation can often lead to bugs that can be difficult to identify in the application's source code
Mutation makes testing and debugging the code difficult because tracking code that leverages mutation becomes a challenge
Code Examples That Demonstrate Object Mutation Object mutation can occur in any of the following scenarios:
Adding, editing, or removing properties
Using methods that can exhibit mutation
When you alter the properties of an object, either directly or indirectly, you are essentially mutating the object. The following code snippet shows how you can mutate an object by changing its property.
JavaScript const author = { id: 1, name: "Joydip Kanjilal"}; author.id = 2; author.city = "Hyderabad, INDIA"; console.log(author); In the preceding piece of code, we create an object named author that contains two properties, namely, id and name. While the id property is used to store the id of the author record, the name property stores the name of the author. Note how we mutate the author object by altering the value pertaining to the id property. Next, we add a new property, named city, to the author object and assign a value to the property. When you run the preceding piece of code, the properties and their values of the author object will be displayed as shown below: JavaScript { id: 2, name: 'Joydip Kanjilal', city: 'Hyderabad, INDIA' } When you pass an object to a function or assign it to a variable in JavaScript, you're essentially passing the reference to the object and not a copy of it. This implies that any change you make through the new variable or parameter affects the one underlying object and is visible through every reference to it. Consider the following piece of code that shows how you can create an object in JavaScript and then assign it to a variable. JavaScript const objA = { id: 1, name: 'Joydip Kanjilal', city: 'Hyderabad, INDIA', pincode: 500089 } const objB = objA; objB.pincode = 500034; console.log(objA); In the preceding piece of code, the object objA is assigned to objB, and the value of the pincode property is changed through objB, i.e., the object referenced by objA is mutated. When you execute the program, the following data will be displayed. JavaScript { id: 1, name: 'Joydip Kanjilal', city: 'Hyderabad, INDIA', pincode: 500034 } Note that the value of the pincode property has been changed. Preventing Object Mutation in JavaScript In JavaScript, you can prevent mutation in several ways, such as the following:
Using object cloning by taking advantage of the Object.assign() method or the spread operator (...)
Using the Object.seal() method to prevent adding or deleting properties of an object
Using the Object.freeze() method to prevent adding, editing, or deleting properties of an object
Using Cloning Refer to the following piece of code that shows how you can clone an object in JavaScript using the spread operator. JavaScript let originalObj = { x: 10, y: 100 }; let clonedObj = { ...originalObj }; Here, the name of the cloned object is clonedObj, and it is identical to the original object named originalObj. So, if you display the values of the two properties of these two objects, the results will be the same. Now, change the value of one of the properties of the cloned object named clonedObj to your desired value, as shown in the piece of code given below. Plain Text clonedObj.x = 50; Now, write the following piece of code to display the value of the property named x pertaining to the two objects originalObj and clonedObj. Plain Text console.log(originalObj.x); console.log(clonedObj.x); When you run the program, you'll observe that the value of the property x in the original object is unchanged. The values will be displayed at the console as shown below: Plain Text 10 50
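The list above also mentions Object.assign() as a cloning option, though only the spread operator is shown. Below is a brief, hedged sketch (an illustrative aside, not part of the original example set) showing Object.assign(), the shallow-copy caveat that applies to both techniques, and structuredClone() for deep copies in modern runtimes.

TypeScript
// Shallow clone with Object.assign(): copies own enumerable properties into a new object.
const source = { x: 10, y: 100 };
const assignedClone = Object.assign({}, source);
assignedClone.x = 50;
console.log(source.x); // 10 -- the original object is unchanged
console.log(assignedClone.x); // 50

// Caveat: Object.assign() and the spread operator copy only one level deep,
// so nested objects are still shared between the original and the clone.
const writer = { name: "Joydip Kanjilal", address: { city: "Hyderabad" } };
const shallowCopy = { ...writer };
shallowCopy.address.city = "Bangalore";
console.log(writer.address.city); // "Bangalore" -- the nested object was mutated

// structuredClone() (available in modern browsers and Node.js 17+) produces a deep copy.
const deepCopy = structuredClone(writer);
deepCopy.address.city = "Chennai";
console.log(writer.address.city); // still "Bangalore"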
Using the Object.freeze() Method The Object.freeze() method can make an object immutable by preventing any alterations to any of its properties. JavaScript const author = { id: 1, name: "Joydip Kanjilal", city: "Hyderabad", state: "Telangana", country: "India", pincode: 500089}; Object.freeze(author); author.city = "Bangalore"; author.state = "Karnataka"; author.pincode = 560010; console.log(author); When you execute the preceding piece of code, the results will be similar to this: JavaScript { id: 1, name: 'Joydip Kanjilal', city: 'Hyderabad', state: 'Telangana', country: 'India', pincode: 500089 } As you can see from the output, even though values were assigned to the properties city, state, and pincode, there is no effect. So, no changes have been made to the data contained in any of the properties of the object. Using the Object.seal() Method You can also use the Object.seal() method to prevent object mutation in JavaScript. This method lets you alter the values of existing properties, but you cannot add or delete any of the properties of the object. The following code example illustrates this: JavaScript const author = { id: 1, name: "Joydip Kanjilal", city: "Hyderabad", state: "Telangana", country: "India", pincode: 500089}; Object.seal(author); author.city = "Bangalore"; author.state = "Karnataka"; author.pincode = 560005; author.booksauthored = 3; console.log(author); In the preceding code snippet, while modifications to the properties of the object named author will be allowed, neither addition nor deletion of the object's properties will be allowed. When you run the program, you'll see that the values of the properties modified are reflected in the result, but the statement that adds a new property is ignored. Here's how the output looks at the console: JavaScript { id: 1, name: 'Joydip Kanjilal', city: 'Bangalore', state: 'Karnataka', country: 'India', pincode: 560005 } Using the Object.defineProperty() Method You can also leverage the Object.defineProperty() method in JavaScript to control the mutability of an object's individual properties. The following code snippet shows how you can use this method to disallow alterations to the value contained in a property whose writability is restricted. JavaScript const author = { id: 1, name: "Joydip Kanjilal"}; Object.defineProperty(author, "booksauthored", { value: 3, writable: false, }); author.booksauthored = 5; console.log(author.booksauthored); When you execute the preceding piece of code, you'll see that the number 3 is displayed on the console. Key Takeaways
JavaScript categorizes data types into two distinct categories: primitives (immutable) and objects (mutable).
The term object mutation refers to the operations that alter or change an object after it has been created. While primitive values cannot be altered, you can always change objects after they have been created. Since strings in JavaScript are immutable, you cannot alter them once they have been created.
Although mutation by itself is not that bad, you should manage it carefully to reduce bugs in your applications.
You can reduce or eliminate mutation in JavaScript by following the recommended practices and leveraging immutable data structures.
DZone events bring together industry leaders, innovators, and peers to explore the latest trends, share insights, and tackle industry challenges. From Virtual Roundtables to Fireside Chats, our events cover a wide range of topics, each tailored to provide you, our DZone audience, with practical knowledge, meaningful discussions, and support for your professional growth. DZone Events Happening Soon Below, you'll find upcoming events that you won't want to miss. Modernizing Enterprise Java Applications: Jakarta EE, Spring Boot, and AI Integration Date: February 25, 2025 | Time: 1:00 PM ET Register for Free! Unlock the potential of AI integration in your enterprise Java applications with our upcoming webinar! Join Payara and DZone to explore how to enhance your Spring Boot and Jakarta EE systems using generative AI tools like Spring AI and REST client patterns. What to Consider When Building an IDP Date: March 4, 2025 | Time: 1:00 PM ET Register for Free! Is your development team bogged down by manual tasks and "TicketOps"? Internal Developer Portals (IDPs) streamline onboarding, automate workflows, and enhance productivity—but should you build or buy? Join Harness and DZone for a webinar to explore key IDP capabilities, compare Backstage vs. managed solutions, and learn how to drive adoption while balancing cost and flexibility. DevOps for Oracle Applications with FlexDeploy: Automation and Compliance Made Easy Date: March 11, 2025 | Time: 1:00 PM ET Register for Free! Join Flexagon and DZone as Flexagon's CEO unveils how FlexDeploy is helping organizations future-proof their DevOps strategy for Oracle Applications and Infrastructure. Explore innovations for automation through compliance, along with real-world success stories from companies who have adopted FlexDeploy. Make AI Your App Development Advantage: Learn Why and How Date: March 12, 2025 | Time: 10:00 AM ET Register for Free! The future of app development is here, and AI is leading the charge. Join OutSystems and DZone, on March 12th at 10am ET, for an exclusive Webinar with Luis Blando, CPTO of OutSystems, and John Rymer, industry analyst at Analysis.Tech, as they discuss how AI and low-code are revolutionizing development. You will also hear from David Gilkey, Leader of Solution Architecture, Americas East at OutSystems, and Roy van de Kerkhof, Director at NovioQ. This session will give you the tools and knowledge you need to accelerate your development and stay ahead of the curve in the ever-evolving tech landscape. Developer Experience: The Coalescence of Developer Productivity, Process Satisfaction, and Platform Engineering Date: March 12, 2025 | Time: 1:00 PM ET Register for Free! Explore the future of developer experience at DZone's Virtual Roundtable, where a panel will dive into key insights from the 2025 Developer Experience Trend Report. Discover how AI, automation, and developer-centric strategies are shaping workflows, productivity, and satisfaction. Don't miss this opportunity to connect with industry experts and peers shaping the next chapter of software development. Unpacking the 2025 Developer Experience Trends Report: Insights, Gaps, and Putting It into Action Date: March 19, 2025 | Time: 1:00 PM ET Register for Free! We've just seen the 2025 Developer Experience Trends Report from DZone, and while it shines a light on important themes like platform engineering, developer advocacy, and productivity metrics, there are some key gaps that deserve attention.
Join Cortex Co-founders Anish Dhar and Ganesh Datta for a special webinar, hosted in partnership with DZone, where they’ll dive into what the report gets right—and challenge the assumptions shaping the DevEx conversation. Their take? Developer experience is grounded in clear ownership. Without ownership clarity, teams face accountability challenges, cognitive overload, and inconsistent standards, ultimately hampering productivity. Don’t miss this deep dive into the trends shaping your team’s future. What's Next? DZone has more in store! Stay tuned for announcements about upcoming Webinars, Virtual Roundtables, Fireside Chats, and other developer-focused events. Whether you’re looking to sharpen your skills, explore new tools, or connect with industry leaders, there’s always something exciting on the horizon. Don’t miss out — save this article and check back often for updates!
Advancing in a software engineering career can be a daunting challenge. Many engineers find themselves stuck, unsure of what steps to take to move from a mid-level role to senior positions such as staff, principal, or distinguished engineer. While technical knowledge is essential, the real differentiators are the skills that allow engineers to build scalable, maintainable, and collaborative software solutions. Open source provides an ideal platform for mastering these crucial skills. It forces engineers to write clean, maintainable code, work within distributed teams, document effectively, and apply industry best practices that lead to software longevity. Some of the most successful open-source projects have been maintained for decades, demonstrating principles that can be used in any professional setting. The reasons and methods for participating in open-source projects were explored in a previous article: Why and How to Participate in Open Source Projects. This article will focus on the hard skills gained through open-source contributions and how they can accelerate a software engineering career. Now, let's explore six key categories of skills that open source can help develop, enabling career advancement. 1. Software Architecture Software architecture is the foundation of any successful project. Open source forces engineers to think critically about design choices because the code must be understandable, maintainable, and scalable by contributors across the globe. When contributing to open-source projects—especially those under organizations like the Eclipse Foundation or Apache Foundation—it is necessary to clearly define the scope, structure, and integration points of the software. This mirrors the architecture work done in large companies, helping to build real-world experience that is directly transferable to enterprise systems. Engaging in open source provides the opportunity to design systems that are:
Modular and extensible
Well-documented and maintainable
Scalable and adaptable to change
2. Software Design Beyond architecture, software design ensures that the code written is both functional and efficient. Open source encourages simplicity and pragmatism—every decision is driven by necessity rather than an arbitrary desire to implement complex patterns. In open source, design decisions are:
Context-driven: Code is written to serve a specific purpose.
Focused on usability: APIs and libraries must be easy to understand and use.
Iterative: Design evolves based on real-world feedback and contributions.
Rather than adding unnecessary layers and abstractions, open-source projects emphasize clarity and efficiency, a mindset that can help prevent over-engineering in enterprise projects. 3. Documentation A common misconception is that documentation is secondary to writing code. In reality, documentation is a core part of software engineering—and open source demonstrates this principle exceptionally well. Successful open-source projects rely on clear documentation to onboard new contributors. This includes:
README files that explain the purpose and usage of a project
API documentation for developers
Design guidelines and architectural decisions
Improving documentation skills makes work more accessible to others and enables scalability within teams. Companies value engineers who can communicate ideas clearly, making documentation a crucial skill for career advancement. 4. Testing Open-source projects rely on robust testing strategies to ensure code quality and maintainability.
Unlike private projects, where tests may be overlooked, open-source software must be reliable enough for anyone to use and extend. By contributing to open source, it is possible to learn how to:
Write unit tests, integration tests, and end-to-end tests
Use testing frameworks effectively
Adopt test-driven development (TDD) to improve code quality
Testing ensures predictability and stability, making it easier to evolve software over time without introducing breaking changes. 5. Persistence and Data Management Data storage and retrieval are fundamental aspects of software engineering. Open-source projects often interact with multiple databases, caching mechanisms, and distributed storage systems. By participating in open source, exposure to various persistence strategies is gained, including:
Relational databases (PostgreSQL, MySQL)
NoSQL databases (MongoDB, Cassandra)
Caching solutions (Redis, Memcached)
Hybrid and new SQL approaches
Understanding these technologies and their trade-offs helps make informed decisions about handling data efficiently in software projects. 6. Leadership and Communication Technical skills alone won't make someone a staff engineer or a principal engineer—leadership and communication skills are also essential. Open source provides a unique opportunity to:
Collaborate with developers from different backgrounds
Review and provide constructive feedback on code contributions
Advocate for design decisions and improvements
Lead discussions on project roadmaps and features
If the goal is to influence technical direction, participating in open source teaches how to communicate effectively, defend ideas with evidence, and lead technical initiatives. Becoming an Ultimate Engineer The ultimate engineer understands the context of software development, fights for simplicity, and embraces the six principles above to create impactful software. Open source is one of the best ways to develop these skills in a real-world setting. By incorporating open-source techniques into daily work, engineers can:
Build a strong portfolio of contributions
Develop a deeper understanding of software design and architecture
Improve documentation and testing practices
Gain expertise in data persistence
Enhance leadership and communication skills
A book titled The Ultimate Engineer provides further insights into these six categories and explains how to apply open-source techniques to accelerate career growth. More details can be found here: The Ultimate Engineer. Conclusion Open source is not just about writing code for free—it's about learning, growing, and making a lasting impact in the industry. Integrating open-source methodologies into daily work improves software engineering skills and positions engineers for career advancement, whether the goal is to become a staff engineer, principal engineer, or even a distinguished fellow. Start today—find an open-source project, contribute, and take your engineering career to the next level!
Modern software applications often need to support multiple frontend UIs such as web, Android, iOS, TV, and VR, each with unique requirements. Traditionally, developers have depended on a single backend to serve all clients. However, serving the needs of different frontends from a monolithic backend can result in performance bottlenecks, complicated APIs, and unnecessary data interactions. The Backend for Frontend (BFF) architecture helps answer these challenges by creating a dedicated back-end service for each frontend type. Each BFF is dedicated to a specific UI type, improving performance, UX, and overall system stability and maintainability. A General-Purpose API Backend (Traditional) If different UIs make the same requests, a general-purpose API can work well. However, the mobile or TV experience often differs significantly from a desktop web experience. First, mobile devices have distinct constraints; less screen space limits how much data you can show, and multiple server connections can drain the device's battery and consume mobile data (on LTE). Next, mobile API calls differ from desktop API calls. For example, in a traditional Netflix scenario, a desktop app might let users browse movies and shows, buy movies online, and show a lot of information about the movies and shows. On mobile, the features are far more limited. As we've developed more mobile applications, it's clear that people interact with devices differently, requiring us to expose different capabilities or features. In general, mobile devices make fewer requests and display less data compared to desktop apps. This results in additional features in the API backend to support mobile interfaces. A general-purpose API backend often ends up taking on many responsibilities, which results in a dedicated team being created to manage the code base and fix bugs. This can lead to increased budget use, a more complex team structure, and front-end teams having to coordinate with this separate team to implement changes. This API team has to prioritize requests from various client teams while also working on integration with downstream APIs. Introducing the Backend For Frontend (BFF) One solution to the traditional general-purpose API issue is to use a dedicated backend for each UI or application type, also known as Backend For Frontend (BFF). Conceptually, the user-facing application has two parts: the client-side application and the server-side component. The BFF is closely aligned with a specific user experience and is typically managed by the same team responsible for the user interface. This makes it easier to tailor and adjust the API to meet the needs of the UI, while also streamlining the release process for both the client and server components. A BFF is focused on only a single user interface, allowing it to be smaller and more targeted in its functionality. How Many BFFs Should We Create? When delivering similar user experiences across different platforms like mobile, TV, desktop, web, AR, and VR, having a separate BFF for each type of client is preferred. For example, both the Android and iOS versions of an app share the same BFF. All TV clients (for example, Android TV, Apple TV, and Roku TV) use the same BFF, which is customized for TV apps. When similar platform apps share a BFF, it's only within the same class of user interface. For example, Netflix's iOS and Android apps share the same BFF, but their TV apps use a different BFF.
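As a minimal, hedged illustration of the idea (not taken from any particular product), the sketch below shows a mobile-specific BFF endpoint in TypeScript with Express; the route, the downstream catalog URL, and the trimmed fields are assumptions made purely for illustration.

TypeScript
// Hypothetical mobile BFF endpoint: it calls one assumed downstream catalog
// service and returns only the compact fields a small screen needs.
import express from "express";

interface CatalogTitle {
  id: string;
  name: string;
  thumbnailUrl: string;
  synopsis: string; // returned by the downstream service but not needed on mobile
}

const app = express();

app.get("/mobile/home", async (_req, res) => {
  // Assumed downstream service URL; requires Node.js 18+ for the global fetch API.
  const response = await fetch("http://catalog-service/titles?limit=10");
  const titles = (await response.json()) as CatalogTitle[];

  // Trim the payload for mobile clients: fewer fields, one request.
  res.json(titles.map((t) => ({ id: t.id, name: t.name, thumbnail: t.thumbnailUrl })));
});

app.listen(3000);

A desktop BFF for the same product would typically expose a richer endpoint (full synopsis, pricing, related titles), and each BFF would be owned by the team that builds the corresponding UI.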
How Do We Handle Multiple Downstream Services Efficiently? BFFs are a useful architectural pattern when dealing with a few back-end services. However, in organizations with many services, they become essential as the need to aggregate multiple downstream calls to provide user functionality grows significantly. Take, for instance, Netflix, where you want to display a user's recommendations along with ratings, comments, available languages, closed captions, trailers, and so on. In this scenario, multiple services are responsible for different parts of the information. The recommendation service holds the list of movies and their IDs, the movie catalog service manages item names and ratings, while the comments service tracks comments. The BFF would expose a method to retrieve the complete recommendations view, which requires at least three downstream service calls. From an efficiency perspective, it's best to run as many of these calls in parallel as possible. After the initial call to the recommendations service, the subsequent calls to the rating and comments services should ideally occur simultaneously to minimize overall response time. Managing parallel and sequential calls, however, can quickly become complicated in more advanced use cases. This is where asynchronous programming models are valuable, as they simplify handling multiple asynchronous calls (a minimal sketch of this pattern appears after the conclusion below). Understanding failure modes is also crucial. For instance, while it might seem logical to wait for all downstream calls to succeed before responding to the client, this isn't always the best approach. If the recommendations service is unavailable, the request can't proceed, but if only the rating service fails, it may be better to degrade the response by omitting the rating information, instead of failing the entire request. The BFF should handle these scenarios, and the client must be capable of interpreting partial responses and rendering them correctly. Conclusion The BFF pattern is a powerful tool for organizations seeking to deliver optimized, scalable, and efficient frontends for a variety of client types. It allows for better separation of concerns, minimizes complexity in frontend development, and improves overall system performance. While the approach does come with challenges, such as maintaining multiple backends and avoiding code duplication, the benefits often outweigh the downsides for teams working in complex, multi-client environments.
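To close out the aggregation discussion above, here is the sketch referenced earlier: a hedged TypeScript example of one required call followed by parallel optional calls that degrade gracefully when an optional service fails. The service URLs and response shapes are illustrative assumptions only, and the global fetch API (Node.js 18+) is assumed.

TypeScript
// Illustrative BFF aggregation: one required call, then parallel optional calls
// whose failures degrade the response instead of failing the whole request.
type Recommendation = { movieId: string };

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`${url} responded with ${res.status}`);
  return (await res.json()) as T;
}

export async function buildRecommendationsView(userId: string) {
  // Required call: without it there is nothing to show.
  const recs = await getJson<Recommendation[]>(
    `http://recommendation-service/users/${userId}/recommendations`
  );
  const ids = recs.map((r) => r.movieId).join(",");

  // Optional calls run in parallel; Promise.allSettled keeps going even if one fails.
  const [ratings, comments] = await Promise.allSettled([
    getJson<Record<string, number>>(`http://catalog-service/ratings?ids=${ids}`),
    getJson<Record<string, string[]>>(`http://comments-service/comments?ids=${ids}`),
  ]);

  return recs.map((r) => ({
    movieId: r.movieId,
    // Degrade gracefully: omit ratings/comments when their service failed.
    rating: ratings.status === "fulfilled" ? ratings.value[r.movieId] ?? null : null,
    comments: comments.status === "fulfilled" ? comments.value[r.movieId] ?? [] : [],
  }));
}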
Stored procedures and functions implement the business logic of the database. When migrating a SQL Server database to PostgreSQL, you will need to convert stored procedures and functions properly, paying attention to parameter handling, rowset retrieval, and other specific syntax constructions. SQL Server uses a dialect of SQL called Transact-SQL (or T-SQL) for stored procedures and functions, while PostgreSQL uses Procedural Language/PostgreSQL (or PL/pgSQL) for the same. These languages have significantly different syntax and capabilities, so stored procedures and functions must be carefully analyzed and converted. Also, some T-SQL features have no direct equivalents in PL/pgSQL, and therefore, an alternative implementation is required for those cases. Finally, stored procedures and functions must be optimized for the PostgreSQL engine to ensure they perform efficiently. Returning a Rowset Both SQL Server and PostgreSQL allow the return of a rowset, usually the result of a SELECT query, from stored procedures or functions, but the syntax differs. If a stored procedure in T-SQL contains SELECT as the last statement of the body, this means it returns a rowset. PL/pgSQL requires either a forward declaration of the returned rowset as a table or fetching the data through a refcursor. When the returned rowset has just a few columns with clear types, you can use the RETURNS TABLE feature of PostgreSQL. In T-SQL: SQL CREATE PROCEDURE GetCustomerOrders @CustomerID INT AS SELECT OrderID, OrderDate, Amount FROM Orders WHERE CustomerID = @CustomerID; GO In PL/pgSQL, the same may look like this (note that the source columns are qualified with a table alias so they do not clash with the RETURNS TABLE output columns of the same names): SQL CREATE OR REPLACE FUNCTION GetCustomerOrders(CustomerID INT) RETURNS TABLE(OrderID INT, OrderDate TIMESTAMP, Amount DECIMAL) AS $$ BEGIN RETURN QUERY SELECT O.OrderID, O.OrderDate, O.Amount FROM Orders O WHERE O.CustomerID = GetCustomerOrders.CustomerID; END; $$ LANGUAGE plpgsql; And the caller PostgreSQL code may look like this: SQL SELECT * FROM GetCustomerOrders(5); If the returned rowset is more complicated and it is hard to determine the data type for each column, the approach above may not work. For those cases, the workaround is to use a refcursor.
In T-SQL: SQL CREATE PROCEDURE GetSalesByRange @DateFrom DATETIME, @DateTo DATETIME AS SELECT C.CustomerID, C.Name AS CustomerName, C.FirstName, C.LastName, C.Email AS CustomerEmail, C.Mobile, C.AddressOne, C.AddressTwo, C.City, C.ZipCode, CY.Name AS Country, ST.TicketID, TT.TicketTypeID, TT.Name AS TicketType, PZ.PriceZoneID, PZ.Name AS PriceZone, ST.FinalPrice AS Price, ST.Created, ST.TransactionType, COALESCE(VME.ExternalEventID, IIF(E.ExternalID = '', NULL, E.ExternalID), '0') AS ExternalID, E.EventID, ES.[Name] AS Section, ST.RowName, ST.SeatName FROM [Event] E WITH (NOLOCK) INNER JOIN EventCache EC WITH (NOLOCK) ON E.EventID = EC.EventID INNER JOIN SaleTicket ST WITH (NOLOCK) ON E.EventID = ST.EventID INNER JOIN EventSection ES WITH (NOLOCK) ON ST.EventSectionID = ES.EventSectionID INNER JOIN Customer C WITH (NOLOCK) ON ST.CustomerID = C.CustomerID INNER JOIN Country CY WITH (NOLOCK) ON C.CountryID = CY.CountryID INNER JOIN TicketType TT WITH (NOLOCK) ON ST.TicketTypeID = TT.TicketTypeID INNER JOIN PriceZone PZ WITH (NOLOCK) ON ST.PriceZoneID = PZ.PriceZoneID LEFT OUTER JOIN VenueManagementEvent VME ON VME.EventID = E.EventID WHERE ST.Created BETWEEN @DateFrom AND @DateTo ORDER BY ST.Created GO In PL/pgSQL: SQL CREATE OR REPLACE FUNCTION GetSalesByRange ( V_DateFrom TIMESTAMP(3), V_DateTo TIMESTAMP(3), V_rc refcursor ) RETURNS refcursor AS $$ BEGIN OPEN V_rc FOR SELECT C.CustomerID, C.Name AS CustomerName, C.FirstName, C.LastName, C.Email AS CustomerEmail, C.Mobile, C.AddressOne, C.AddressTwo, C.City, C.ZipCode, CY.Name AS Country, ST.TicketID, TT.TicketTypeID, TT.Name AS TicketType, PZ.PriceZoneID, PZ.Name AS PriceZone, ST.FinalPrice AS Price, ST.Created, ST.TransactionType, COALESCE( VME.ExternalEventID, (CASE WHEN E.ExternalID = '' THEN NULL ELSE E.ExternalID END), '0') AS ExternalID, E.EventID, ES.Name AS Section, ST.RowName, ST.SeatName FROM Event E INNER JOIN EventCache EC ON E.EventID = EC.EventID INNER JOIN SaleTicket ST ON E.EventID = ST.EventID INNER JOIN EventSection ES ON ST.EventSectionID = ES.EventSectionID INNER JOIN Customer C ON ST.CustomerID = C.CustomerID INNER JOIN Country CY ON C.CountryID = CY.CountryID INNER JOIN TicketType TT ON ST.TicketTypeID = TT.TicketTypeID INNER JOIN PriceZone PZ ON ST.PriceZoneID = PZ.PriceZoneID LEFT OUTER JOIN VenueManagementEvent VME ON VME.EventID = E.EventID WHERE ST.Created BETWEEN V_DateFrom AND V_DateTo ORDER BY ST.Created; RETURN V_rc; END; $$ LANGUAGE plpgsql; And the caller PostgreSQL code may look like this: SQL BEGIN; SELECT GetSalesByRange( '2024-01-01'::TIMESTAMP(3), '2025-01-01'::TIMESTAMP(3), 'mycursorname' ); FETCH 4 FROM mycursorname; COMMIT; Declaration of Local Variables T-SQL allows local variables to be declared everywhere inside a stored procedure or function body. PL/pgSQL requires that all local variables are declared before BEGIN keyword: SQL CREATE OR REPLACE FUNCTION CreateEvent(…) AS $$ DECLARE v_EventID INT; v_EventGroupID INT; BEGIN … END; $$ LANGUAGE plpgsql; In SQL Server, table variables can be declared as follows: SQL DECLARE @Products TABLE ( ProductID int, ProductTitle varchar(100), ProductPrice decimal (8,2) ) PostgreSQL does not support this feature; temporary tables should be used instead: SQL CREATE TEMP TABLE Products ( ProductID int, ProductTitle varchar(100), ProductPrice decimal (8,2) ) Remember that temporary tables are automatically dropped at the end of the session or the current transaction. 
If you need to manage the lifetime of the table explicitly, use the DROP TABLE IF EXISTS statement. Pay attention to the appropriate SQL Server to PostgreSQL type mapping when converting variable declarations. Last Value of Auto-Increment Column After running an INSERT query, you may need to get the generated value of the auto-increment column. In T-SQL, it may be obtained as SQL CREATE TABLE aitest (id int identity, val varchar(20)); INSERT INTO aitest(val) VALUES ('one'),('two'),('three'); SELECT @LastID = SCOPE_IDENTITY(); PostgreSQL allows access to the last inserted value via an automatically generated sequence that always has the name {tablename}_{columnname}_seq: SQL CREATE TABLE aitest (id serial, val varchar(20)); INSERT INTO aitest(val) VALUES ('one'),('two'),('three'); LastID := currval('aitest_id_seq'); Built-In Functions When migrating stored procedures and functions from SQL Server to PostgreSQL, all specific built-in functions and operators must be converted into equivalents according to the rules below:
Function CHARINDEX must be replaced by the PostgreSQL equivalent POSITION.
Function CONVERT must be migrated into PostgreSQL according to the rules specified in this article.
Function DATEADD($interval, $n_units, $date) can be converted into PostgreSQL expressions that use the + operator, depending on the $interval value, as follows:
DAY / DD / D / DAYOFYEAR / DY: ($date + $n_units * interval '1 day')::date
HOUR / HH: ($date + $n_units * interval '1 hour')::date
MINUTE / MI / N: ($date + $n_units * interval '1 minute')::date
MONTH / MM / M: ($date + $n_units * interval '1 month')::date
QUARTER / QQ / Q: ($date + $n_units * 3 * interval '1 month')::date
SECOND / SS / S: ($date + $n_units * interval '1 second')::date
WEEK / WW / WK: ($date + $n_units * interval '1 week')::date
WEEKDAY / DW / W: ($date + $n_units * interval '1 day')::date
YEAR / YY: ($date + $n_units * interval '1 year')::date
Function DATEDIFF($interval, $date1, $date2) of SQL Server can be emulated in PostgreSQL via DATE_PART as follows:
DAY / DD / D / DAYOFYEAR / DY: date_part('day', $date2 - $date1)::int
HOUR / HH: 24 * date_part('day', $date2 - $date1)::int + date_part('hour', $date2 - $date1)
MINUTE / MI / N: 1440 * date_part('day', $date2 - $date1)::int + 60 * date_part('hour', $date2 - $date1) + date_part('minute', $date2 - $date1)
MONTH / MM / M: (12 * (date_part('year', $date2) - date_part('year', $date1))::int + date_part('month', $date2) - date_part('month', $date1))::int
SECOND / SS / S: 86400 * date_part('day', $date2 - $date1)::int + 3600 * date_part('hour', $date2 - $date1) + 60 * date_part('minute', $date2 - $date1) + date_part('second', $date2 - $date1)
WEEK / WW / WK: TRUNC(date_part('day', $date2 - $date1) / 7)
WEEKDAY / DW / W: date_part('day', $date2 - $date1)::int
YEAR / YY: (date_part('year', $date2) - date_part('year', $date1))::int
Every occurrence of DATEPART must be replaced by DATE_PART.
The SQL Server function GETDATE must be converted into PostgreSQL NOW().
The conditional operator IIF($condition, $first, $second) must be converted into CASE WHEN $condition THEN $first ELSE $second END.
Every occurrence of ISNULL must be replaced by COALESCE.
The SQL Server function REPLICATE must be converted into its PostgreSQL equivalent, REPEAT.
Every occurrence of SPACE($n) must be replaced by REPEAT(' ', $n).
Conclusion The migration of stored procedures and functions between two DBMSs is quite a complicated procedure requiring much time and effort. Although it cannot be completely automated, some tools available online can help partially automate the procedure.
Microservices and containers are revolutionizing how modern applications are built, deployed, and managed in the cloud. However, developing and operating microservices can introduce significant complexity, often requiring developers to spend valuable time on cross-cutting concerns like service discovery, state management, and observability. Dapr, or Distributed Application Runtime, is an open-source runtime for building microservices on cloud and edge environments. It provides platform-agnostic building blocks like service discovery, state management, pub/sub messaging, and observability out of the box. Dapr moved to the graduated maturity level of CNCF (Cloud Native Computing Foundation) and is currently used by many enterprises. When combined with Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service from AWS, Dapr can accelerate the adoption of microservices and containers, enabling developers to focus on writing business logic without worrying about infrastructure plumbing. Amazon EKS makes managing Kubernetes clusters easy, enabling effortless scaling as workloads change. In this blog post, we'll explore how Dapr simplifies microservices development on Amazon EKS. We'll start by diving into two essential building blocks: service invocation and state management. Service Invocation Seamless and reliable communication between microservices is crucial. However, developers often struggle with complex tasks like service discovery, standardizing APIs, securing communication channels, handling failures gracefully, and implementing observability. With Dapr's service invocation, these problems become a thing of the past. Your services can effortlessly communicate with each other using industry-standard protocols like gRPC and HTTP/HTTPS. Service invocation handles all the heavy lifting, from service registration and discovery to request retries, encryption, access control, and distributed tracing. State Management Dapr's state management building block simplifies the way developers work with the state in their applications. It provides a consistent API for storing and retrieving state data, regardless of the underlying state store (e.g., Redis, AWS DynamoDB, Azure Cosmos DB). This abstraction enables developers to build stateful applications without worrying about the complexities of managing and scaling state stores. Prerequisites In order to follow along this post, you should have the following: An AWS account. If you don’t have one, you can sign up for one.An IAM user with proper permissions. The IAM security principal that you're using must have permission to work with Amazon EKS IAM roles, service-linked roles, AWS CloudFormation, a VPC, and related resources. For more information, see Actions, resources, and condition keys for Amazon Elastic Container Service for Kubernetes and Using service-linked roles in the AWS Identity and Access Management User Guide. Application Architecture In the diagram below, we have two microservices: a Python app and a Node.js app. The Python app generates order data and invokes the /neworder endpoint exposed by the Node.js app. The Node.js app writes the incoming order data to a state store (in this case, Amazon ElastiCache) and returns an order ID to the Python app as a response. By leveraging Dapr's service invocation building block, the Python app can seamlessly communicate with the Node.js app without worrying about service discovery, API standardization, communication channel security, failure handling, or observability. 
It implements mTLS to provide secure service-to-service communication. Dapr handles these cross-cutting concerns, allowing developers to focus on writing the core business logic. Additionally, Dapr's state management building block simplifies how the Node.js app interacts with the state store (Amazon ElastiCache). Dapr provides a consistent API for storing and retrieving state data, abstracting away the complexities of managing and scaling the underlying state store. This abstraction enables developers to build stateful applications without worrying about the intricacies of state store management. The Amazon EKS cluster hosts a namespace called dapr-system, which contains the Dapr control plane components. The dapr-sidecar-injector automatically injects a Dapr runtime into the pods of Dapr-enabled microservices. Service Invocation Steps
1. The order generator service (Python app) invokes the Node app's method /neworder. This request is sent to the local Dapr sidecar, which runs in the same pod as the Python app.
2. Dapr resolves the target app using the Amazon EKS cluster's DNS provider and sends the request to the Node app's sidecar.
3. The Node app's sidecar sends the request to the Node app microservice.
4. The Node app then writes the order ID received from the Python app to Amazon ElastiCache.
5. The Node app sends the response to its local Dapr sidecar.
6. The Node app's sidecar forwards the response to the Python app's Dapr sidecar.
7. The Python app's sidecar returns the response to the Python app, which had initiated the request to the Node app's method /neworder.
Deployment Steps Create and Confirm an EKS Cluster To set up an Amazon EKS (Elastic Kubernetes Service) cluster, you'll need to follow several steps. Here's a high-level overview of the process: Prerequisites
Install and configure the AWS CLI
Install eksctl, kubectl, and the AWS IAM Authenticator
1. Create an EKS cluster. Use eksctl to create a basic cluster with a command like: Shell eksctl create cluster --name my-cluster --region us-west-2 --node-type t3.medium --nodes 3 2. Configure kubectl. Update your kubeconfig to connect to the new cluster: Shell aws eks update-kubeconfig --name my-cluster --region us-west-2 3. Verify the cluster. Check if your nodes are ready: Shell kubectl get nodes Install Dapr on Your EKS Cluster 1. Install the Dapr CLI: Shell wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash 2. Verify the installation: Shell dapr -h 3. Install Dapr and validate: Shell dapr init -k --dev dapr status -k The Dapr components statestore and pubsub are created in the default namespace. You can check them by using the command below: Shell dapr components -k Configure Amazon ElastiCache as Your Dapr State Store Create an Amazon ElastiCache cache to store the state for the microservice. In this example, we are using ElastiCache Serverless, which quickly creates a cache that automatically scales to meet application traffic demands with no servers to manage. Configure the security group of the ElastiCache cache to allow connections from your EKS cluster. For the sake of simplicity, keep it in the same VPC as your EKS cluster. Take note of the cache endpoint, which we will need for the subsequent steps. Running a Sample Application 1. Clone the Git repo of the sample application: Shell git clone https://github.com/dapr/quickstarts.git
2. Create redis-state.yaml and provide the Amazon ElastiCache endpoint for redisHost:
YAML
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: redisdaprd-7rr0vd.serverless.use1.cache.amazonaws.com:6379
  - name: enableTLS
    value: true
Apply the YAML configuration for the state store component using kubectl: Shell kubectl apply -f redis-state.yaml 3. Deploy the microservices with the sidecar. For the Node app microservice, open the /quickstarts/tutorials/hello-kubernetes/deploy/node.yaml file and you will notice the annotations below. They tell the Dapr control plane to inject a sidecar and assign a name to the Dapr application.
YAML
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"
Add the annotation service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" in node.yaml to create an internet-facing AWS load balancer.
YAML
kind: Service
apiVersion: v1
metadata:
  name: nodeapp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
  labels:
    app: node
spec:
  selector:
    app: node
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer
Deploy the Node app using kubectl. Navigate to the /quickstarts/tutorials/hello-kubernetes/deploy directory and execute the command below: Shell kubectl apply -f node.yaml Obtain the load balancer's DNS name, which appears under EXTERNAL-IP in the output of the command below: Shell kubectl get svc nodeapp http://k8s-default-nodeapp-3a173e0d55-f7b14bedf0c4dd8.elb.us-east-1.amazonaws.com Navigate to the /quickstarts/tutorials/hello-kubernetes directory, which contains the sample.json file, to execute the next step: Shell curl --request POST --data "@sample.json" --header "Content-Type: application/json" http://k8s-default-nodeapp-3a173e0d55-f14bedff0c4dd8.elb.us-east-1.amazonaws.com/neworder You can verify the output by accessing the /order endpoint via the load balancer in a browser. Plain Text http://k8s-default-nodeapp-3a173e0d55-f7b14bedff0c4dd8.elb.us-east-1.amazonaws.com/order You will see the output as {"OrderId":"42"}. Next, deploy the second microservice, the Python app, whose business logic generates a new order ID every second and invokes the Node app's /neworder method. Navigate to the /quickstarts/tutorials/hello-kubernetes/deploy directory and execute the command below: Shell kubectl apply -f python.yaml 4. Validate and test your application deployment. Now that both microservices are deployed, the Python app is generating orders and invoking /neworder, as evident from the logs below: Shell kubectl logs --selector=app=python -c daprd --tail=-1
Plain Text
time="2024-03-07T12:43:11.556356346Z" level=info msg="HTTP API Called" app_id=pythonapp instance=pythonapp-974db9877-dljtw method="POST /neworder" scope=dapr.runtime.http-info type=log useragent=python-requests/2.31.0 ver=1.12.5
time="2024-03-07T12:43:12.563193147Z" level=info msg="HTTP API Called" app_id=pythonapp instance=pythonapp-974db9877-dljtw method="POST /neworder" scope=dapr.runtime.http-info type=log useragent=python-requests/2.31.0 ver=1.12.5
We can see that the Node app is receiving the requests and writing to the state store, Amazon ElastiCache in our example: Shell kubectl logs --selector=app=node -c node --tail=-1
Plain Text
Got a new order! Order ID: 367
Successfully persisted state for Order ID: 367
Got a new order! Order ID: 368
Successfully persisted state for Order ID: 368
Got a new order! Order ID: 369
Successfully persisted state for Order ID: 369
To confirm that the data is persisted in Amazon ElastiCache, we access the /order endpoint below. It returns the latest order ID generated by the Python app. Plain Text http://k8s-default-nodeapp-3a173e0d55-f7b14beff0c4dd8.elb.us-east-1.amazonaws.com/order You will see an output with the most recent order, such as {"OrderId":"370"}. (If you would like to inspect the persisted keys directly in the cache, see the sketch at the end of this post.) Clean Up Run the commands below to delete the Node app and Python app deployments along with the state store component. Navigate to the /quickstarts/tutorials/hello-kubernetes/deploy directory to execute them:
Shell
kubectl delete -f node.yaml
kubectl delete -f python.yaml
kubectl delete -f redis-state.yaml
You can then tear down your EKS cluster using eksctl and delete the Amazon ElastiCache cache. Since we created the cluster with CLI flags in the first step, delete it by name: Shell eksctl delete cluster --name my-cluster --region us-west-2 Conclusion Dapr and Amazon EKS form a powerful alliance for microservices development. Dapr simplifies cross-cutting concerns, while EKS manages the Kubernetes infrastructure, allowing developers to focus on core business logic and boost productivity. This combination accelerates the creation of scalable, resilient, and observable applications, significantly reducing operational overhead. It's an ideal foundation for your microservices journey. Watch for upcoming posts exploring Dapr and EKS's capabilities in distributed tracing and observability, offering deeper insights and best practices.
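As an optional addendum to the walk-through above: if you want to peek at the persisted state directly in ElastiCache rather than through the /order endpoint, something along these lines should work from a pod or EC2 instance inside the same VPC. This is only a sketch; the cache endpoint is the placeholder used earlier, and the nodeapp||order key assumes Dapr's default Redis state-store key format of <app-id>||<key>, so adjust both to match your environment.
Shell
# Connect to the ElastiCache Serverless endpoint over TLS (serverless caches require TLS)
redis-cli --tls -h redisdaprd-7rr0vd.serverless.use1.cache.amazonaws.com -p 6379

# Inside the redis-cli prompt: list the Dapr keys written by the Node app and inspect one;
# Dapr's Redis state store keeps each entry as a hash with data and version fields
KEYS nodeapp||*
HGETALL nodeapp||order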
Heroku now officially supports .NET! .NET developers now have access to an officially supported buildpack for .NET, which means you can deploy your .NET apps onto Heroku with just one command: git push heroku main. Gone are the days of searching for Dockerfiles or community buildpacks. With official support, .NET developers can now run any .NET application (version 8.0 and higher) on the Heroku platform. Being on the platform also means you get simple, low-friction deployment; scaling and service management; access to the add-on ecosystem; and security and governance features for enterprise use. Intrigued? Let's talk about what this means for .NET developers. Why This Matters for .NET Developers In my experience, running an app on Heroku is pretty easy. But deploying .NET apps was an exception. You could deploy on Heroku, but there wasn't official support. One option was to wrap your app in a Docker container. This meant creating a Dockerfile and dealing with all the maintenance that comes along with that approach. Alternatively, you could find a third-party buildpack, but that introduced another dependency into your deployment process, and you'd lose time trying to figure out which community buildpack was the right one for you. Needing to use these workarounds was unfortunate, as Heroku's seamless deployment is supposed to make it easy to create and prototype new apps. Now, with official buildpack support, the deployment experience for .NET developers is smoother and more reliable. Key Benefits of .NET on Heroku The benefits of the new update center on simplicity and scalability. It all begins with simple deployment. Just one git command, and your deployment begins. No need to start another workflow or log into another site every time; just push your code from the command line, and Heroku takes care of the rest. Heroku's official .NET support currently includes C#, Visual Basic, and F# projects for .NET and ASP.NET Core frameworks (version 8.0 and higher). This means that a wide variety of .NET projects are now officially supported. Want to deploy a Blazor app alongside your ASP.NET REST API? You can do that now. Coming onto the platform also means you can scale as your app grows. If you need to add another service using a different language, you can deploy that service just as easily as your original app. Or you can easily scale your dynos to match peak load requirements. This scaling extends to Heroku's ecosystem of add-ons, making it easy to add value to your application with supporting services while keeping you and your team focused on your core application logic. In addition to simple application deployment, the platform also supports more advanced CI/CD and DevOps needs. With Heroku Pipelines, you can support multiple deployment environments and set up review apps so code reviewers can access a live version of your app for each pull request. And all of this integrates tightly with GitHub, giving you automatic deployment triggers to streamline your dev flow. Getting Started Let's do a quick walk-through of how to get started. In addition to your application and Git, you will also need the Heroku CLI installed on your local machine. Initialize the CLI with the heroku login command; this will take you to a browser to log into your Heroku account. Once you're logged in, navigate to your .NET application folder.
In that folder, run the following commands:
Plain Text
~/project$ heroku create
~/project$ heroku buildpacks:add heroku/dotnet
Now, you're ready to push your app! You just need one command to go live: Plain Text ~/project$ git push heroku main That's it! For simpler .NET applications, this is all you need. Your application is now live at the app URL provided in the response to your heroku create command. To see it again, you can always use heroku info. Or, you can run heroku open to launch your browser at your app URL. If you can't find the URL, log in to the Heroku Dashboard, find your app, and click Open app. You'll be redirected to your app URL. If you have a more complex application or one with multiple parts, you will need to define a Procfile, which tells Heroku how to start up your application. Don't be intimidated! Many Procfiles are just a couple of lines (see the sketch at the end of this post). For more in-depth information, check out the Getting Started on Heroku with .NET guide. Now, we've got another question to tackle. Who Should Care? The arrival of .NET on Heroku is relevant to anyone who wants to deploy scalable .NET services and applications seamlessly. For solo devs and startups, the platform's low friction and scaling take away the burden of deployment and hosting. This allows small teams to focus on building out their core application logic. These teams are also not restricted by their app's architecture, as Heroku supports both large single-service applications and distributed microservice apps. Enterprise teams are poised to benefit as well. .NET has historically found much of its adoption in the enterprise, and official .NET support on Heroku means these teams can now combine their .NET experience with the ease of deploying to the Heroku platform. Heroku's low friction enables rapid prototyping of new applications, and Dyno Formations make it easier to manage and scale a microservice architecture. Additionally, you can get governance through Heroku Enterprise, enabling the security and controls that larger enterprises require. Finally, .NET enthusiasts from all backgrounds and skill levels can now benefit from this new platform addition. By going with a modern PaaS, you can play around with apps and projects of all sizes hassle-free. Wrap-Up That's a brief introduction to official .NET support on Heroku! It's now easier than ever to deploy .NET applications of all sizes to Heroku. What are you going to build and deploy first? Let me know in the comments!
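To round out the Procfile mention above, here is a minimal sketch. The only firm parts are the format itself, a process type followed by the command that starts your app, and the need to listen on the port Heroku assigns via $PORT; the project name MyApp and the bin/publish output path are assumptions, so check your build log for the actual location of your published executable.
Plain Text
web: cd MyApp/bin/publish && ./MyApp --urls http://*:$PORT
Commit the Procfile at the root of your repository and push again, and Heroku will use it to start your app on the next deploy.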