Using Modules in Node.js

Node.js is an event-driven, server-side JavaScript environment. It runs JavaScript using the V8 engine developed by Google for use in the Chrome web browser. Leveraging V8 allows Node.js to provide a server-side runtime environment that compiles and executes JavaScript at high speed.

The Module System

This article covers Node's module system and the different categories of Node.js modules.

Application Modularization

Like most programming languages, Node.js uses modules as a way of organizing code. The module system allows you to organize your code, hide information, and only expose the public interface of a component using module.exports.

Node.js uses the CommonJS specification for its module system:

  • Each file is its own module; for instance, in the following example, index.js and math.js are both modules
  • Each file has access to the current module definition using the module variable
  • The export of the current module is determined by the module.exports variable
  • To import a module, use the globally available require function

Take a look at a simple example:

// math.js file
function add(a, b) {
  return a + b;
}
…
…
module.exports = {
  add,
  mul,
  div,
};

// index.js file
const math = require('./math');
console.log(math.add(30, 20)); // 50

To call other functions such as mul and div, you can alternatively use object destructuring when requiring the module, for example, const { add } = require('./math');. The code files for the section The Module System are placed at Code/Lesson-1/b-module-system.

Module Categories

You can place Node.js modules into three categories:

  • Built-in (native) modules: These are modules that come with Node.js itself; you don’t have to install them separately.
  • Third-party modules: These are modules that are often installed from a package repository. npm is a commonly used package repository, but you can still host packages on GitHub, your own private server, and so on.
  • Local modules: These are modules that you have created within your application, like the example given previously.

Built-In Modules

These are modules that can be used straight away without any further installation. All you need to do is to require them. There are quite a lot of them, but here are a few that you are likely to come across when building web applications:

  • assert: Provides a set of assertion tests to be used during unit testing
  • buffer: To handle binary data
  • child_process: To run a child process
  • crypto: To handle OpenSSL cryptographic functions
  • dns: To do DNS lookups and name resolution functions
  • events: To handle events
  • fs: To handle the filesystem
  • http or https: For creating HTTP(S) servers
  • stream: To handle streaming data
  • util: To access utility functions like deprecate (for marking functions as deprecated), format (for string formatting), inspect (for object debugging), and so on

For example, the following code reads the content of the lesson-1/temp/sample.txt file using the built-in fs module:

const fs = require('fs');

let file = `${__dirname}/temp/sample.txt`;
fs.readFile(file, 'utf8', (err, data) => {
  if (err) throw err;
  console.log(data);
});

npm – Third-Party Module Registry

Node Package Manager (npm) is the package manager for JavaScript and the world’s largest software registry, enabling developers to discover packages of reusable code. To install an npm package, you only need to run the npm install <package-name> command within your project directory.

Here’s a simple example. If you want to use a package (library) like request in your project, you can run the following command on your Terminal, within your project directory:

npm install request

To use it in your code, you should require it, like any other module:

const request = require('request');

request('http://www.example.com', (error, response, body) => {
  if (error) console.log('error:', error); // Print the error if one occurred
  else console.log('body:', body); // Print the HTML for the site.
});

More details about npm can be found at https://docs.npmjs.com/. When you run the npm install <module-name> command on your project for the first time, the node_modules folder gets created at the root of your project.

Scanning for node_modules

It's worth noting how Node.js goes about resolving a particular required module. For example, if a /home/tony/projects/foo.js file has a require call require('bar'), Node.js scans the filesystem for node_modules in the following order; the first bar.js found is returned:

  • /home/tony/projects/node_modules/bar.js
  • /home/tony/node_modules/bar.js
  • /home/node_modules/bar.js
  • /node_modules/bar.js

Node.js looks for node_modules/bar in the current folder, then in every parent folder, until it reaches the root of the filesystem tree for the current file. Note that a module foo/index.js can be required simply as foo, without specifying the index file; it will be picked up by default.
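As a minimal sketch (the foo package here is hypothetical), requiring a folder by name resolves to its index.js:

// node_modules/foo/index.js (hypothetical package)
module.exports = { greet: () => console.log('hello from foo') };

// consumer file, e.g., /home/tony/projects/app.js
const foo = require('foo'); // resolves to node_modules/foo/index.js
foo.greet(); // hello from foo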

Handy npm Commands

Now dive a little deeper into npm, by looking at some of the handy npm commands that you will often need:

  • npm init: Initializes a Node.js project. This should be run at the root of your project and will create a respective package.json file (a sample appears after this list). This file usually has the following parts (keys):
  • name: Name of the project.
  • version: Version of the project.
  • description: Project description.
  • main: The entry-point to your project, the main file.
  • scripts: This will be a list of other keys whose values will be the scripts to be run, for example, test, dev-server. Therefore, to run this script, you will only need to type commands such as npm run dev-server, npm run test, and so on.
  • dependencies: List of third-party packages and their versions used by the project. Whenever you do npm install <package-name> --save, this list is automatically updated.
  • devDependencies: List of third-party packages that are not required for production, but only during development. This will usually include packages that help to automate your development workflow, for example, task runners like gulp.js. This list is automatically updated whenever you do npm install <package-name> --save-dev.
  • npm install: This will install all the packages, as specified in the package.json file.
  • npm install <package-name> <options>:
  • The --save option installs the package and saves the details in the package.json file.
  • The --save-dev option installs the package and saves the details in package.json, under devDependencies.
  • The --global option installs the package globally on the whole system, not only in the current project. Due to permissions, this might require running the command with administrator rights, for example, sudo npm install <package-name> --global.
  • npm install <package-name>@<version>: Installs a specific version of a package. If a version is not specified, the latest version is installed.
  • npm list: Lists the packages that have been installed for the project, reading from what is installed in node_modules.
  • npm uninstall <package-name>: Removes an installed package.
  • npm outdated: Lists installed packages that are outdated, that is, newer versions have been released.
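Here is a minimal, hypothetical package.json illustrating the keys described in the npm init entry above (names and versions are placeholders):

{
  "name": "beginning-nodejs",
  "version": "1.0.0",
  "description": "A sample Node.js project",
  "main": "index.js",
  "scripts": {
    "test": "node test.js"
  },
  "dependencies": {
    "lodash": "^4.17.10"
  },
  "devDependencies": {
    "gulp": "^4.0.0"
  }
}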

Local Modules

You have already looked at how local modules are loaded from the previous example that had math.js and index.js.

Since JavaScript Object Notation (JSON) is such an important part of the web, Node.js has fully embraced it as a data format, even locally. You can load a JSON object from the local filesystem the same way you load a JavaScript module. During the module loading sequence, whenever a file.js is not found, Node.js looks for a file.json.

See the example files in lesson-1/b-module-system/1-basics/load-json.js:

const config = require('./config/sample');

console.log(config.foo); // bar

Here, you will notice that once required, the JSON file is transformed into a JavaScript object implicitly. Other languages will have you read the file and perhaps use a different mechanism to convert the content into a data structure such as a map, a dictionary, and so on.

For local files, the extension is optional, but should there be a conflict, it might be necessary to specify the extension. If you have both a sample.js and a sample.json file in the same folder, the .js file will be picked by default; it would be prudent to specify the extension, for example, const config = require('./config/sample.json').

When you run npm install, without specifying the module to install, npm will install the list of packages specified (under dependencies and devDependencies in the package.json file in your project). If package.json does not exist, it will give an error indicating that no such file has been found.

Activity: Running Basic Node.js Code

Open the IDE and the Terminal to implement this solution and learn how to write a basic Node.js file and run it. Write a very basic mathematical library with handy mathematical functions using the following steps:

  1. Create your project directory (folder), where all the code for this will be kept. Inside this directory, create another directory named lesson-1, and inside it, create another directory called activity-a. All this can be done using the following command:
mkdir -p beginning-nodejs/lesson-1/activity-a
  2. Inside activity-a, create a file named math.js using the touch math.js command.
  3. Inside this file, create the following functions:
  • add: This takes any two numbers and returns the sum of both, for example, add(2, 5) returns 7
  • sum: Unlike add, sum takes any number of numbers and returns their sum, for example, sum(10, 5, 6) returns 21
  4. After these functions, write the following code to act as tests for your code:
console.log(add(10, 6)); // 16

console.log(sum(10, 5, 6)); // 21
  5. Now, on the Terminal, change directory to lesson-1.
  6. To run the code, run the following command:
node activity-a/math.js

The 16 and 21 values should be printed out on the Terminal.
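For reference, here is one possible math.js that satisfies the activity (a sketch, not the only solution); sum uses the rest parameter to accept any number of arguments:

// math.js (one possible solution)
function add(a, b) {
  return a + b;
}

// sum accepts any number of arguments via the rest parameter
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

console.log(add(10, 6)); // 16
console.log(sum(10, 5, 6)); // 21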

Activity: Using a Third-Party Package

This activity builds upon the Running Basic Node.js Code activity. If the argument is a single array, sum up its numbers; if there is more than one array, first combine the arrays into one before summing up. Use the concat() function from lodash, which is a third-party package that you need to install.

Now create a new function, sumArray, which can sum up numbers from one or more arrays using the following steps:

  1. Inside lesson-1, create another folder called activity-b.
  2. On the Terminal, change directory to activity-b and run the following command:
npm init
  3. This will take you to an interactive prompt; just press Enter all the way, leaving the answers as suggested defaults. The aim here is to get a package.json file, which will help organize your installed packages.
  4. Since you'll be using lodash, install it. Run the following command:
npm install lodash --save

Notice that you're adding the --save option on your command so that the package installed can be tracked in package.json. When you open the package.json file created in step 3, you will see an added dependencies key with the details.

  5. Create a math.js file in the activity-b directory and copy the math.js code from the Running Basic Node.js Code activity into this file.
  6. Now, add the sumArray function right after the sum function.
  7. Start by requiring lodash, which you installed in step 4, since you're going to use it in the sumArray function:
const _ = require('lodash');
  8. The sumArray function should call the sum function to reuse your code. Use the spread operator on the array. See the following code:
function sumArray() {
  let arr = arguments[0];
  if (arguments.length > 1) {
    arr = _.concat(...arguments);
  }
  // reusing the sum function
  // using the spread operator (...) since
  // sum takes an argument of numbers
  return sum(...arr);
}
  9. At the end of the file, export the three functions, add, sum, and sumArray, with module.exports.
  10. In the same activity-b folder, create a file, index.js.
  11. In the index.js file, require ./math.js and go ahead and use sumArray:
// testing

console.log(math.sumArray([10, 5, 6])); // 21

console.log(math.sumArray([10, 5], [5, 6], [1, 3])); // 30
  12. Run the following command on the Terminal:
node index.js

You should see 21 and 30 printed out.
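For reference, here is a possible end state of the two activity-b files (a sketch; your version may differ as long as the output matches):

// activity-b/math.js (sketch)
const _ = require('lodash');

function add(a, b) {
  return a + b;
}

function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

function sumArray() {
  let arr = arguments[0];
  if (arguments.length > 1) {
    arr = _.concat(...arguments); // combine all arrays into one
  }
  return sum(...arr);
}

module.exports = { add, sum, sumArray };

// activity-b/index.js (sketch)
const math = require('./math');

console.log(math.sumArray([10, 5, 6])); // 21
console.log(math.sumArray([10, 5], [5, 6], [1, 3])); // 30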

If you found this article interesting, you can explore Anthony Nandaa’s Beginning API Development with Node.js to learn everything you need to get up and running with cutting-edge API development using JavaScript and Node.js. Beginning API Development with Node.js begins with the basics of Node.js in the context of backend development and quickly leads you through the creation of an example client that pairs up with a fully authenticated API implementation.

How to Work with the Latest JS features in React


React is mainly written in modern JavaScript (ES6, ES7, and ES8). To take full advantage of React, there are some modern JS features you should master to get the best results in your React applications.

In this article, you’ll learn the essential JS features so that you are ready to start working on your first React application.

How to do it

In this section, you’ll see how to use the most important JS features in React:

  • let and const: The new way to declare variables in JavaScript is using let or const. You can use let to declare variables that can change their value, but only within a block scope. The difference between let and var is that let is block scoped and cannot become a global, whereas with var you can declare a global variable, for example:
var name = 'Carlos Santana';
let age = 30;

console.log(window.name); // Carlos Santana
console.log(window.age);  // undefined
  • The best way to understand block scope is by declaring a for loop with var and let. First, use var and see its behavior:
for (var i = 1; i <= 10; i++) {
  console.log(i); // 1, 2, 3, 4... 10
}

console.log(i); // Will print the last value of i: 10
  • If you write the same code with let, this will be the result:
for (let i = 1; i <= 10; i++) {
  console.log(i); // 1, 2, 3, 4... 10
}

console.log(i); // Uncaught ReferenceError: i is not defined

  • With const, you can declare constants, which means that the value can’t be changed (except for arrays and objects):
const pi = 3.1416;
pi = 5; // Uncaught TypeError: Assignment to constant variable.
  • If you declare an array with const, you can manipulate the array elements (add, remove, or modify elements):
const cryptoCurrencies = ['BTC', 'ETH', 'XRP'];

// Adding ERT: ['BTC', 'ETH', 'XRP', 'ERT'];
cryptoCurrencies.push('ERT');

// Will remove the first element: ['ETH', 'XRP', 'ERT'];
cryptoCurrencies.shift();

// Modifying an element
cryptoCurrencies[1] = 'LTC'; // ['ETH', 'LTC', 'ERT'];
  • Also, using objects, you can add, remove, or modify the nodes:
const person = {
  name: 'Carlos Santana',
  age: 30,
  email: 'carlos@milkzoft.com'
};

// Adding a new node...
person.website = 'https://www.codejobs.com';

// Removing a node...
delete person.email;

// Updating a node...
person.age = 29;
  • Spread operator: The spread operator (...) splits an iterable object into individual values. In React, it can be used to push values into another array, for example, when you want to add a new item to a to-do list utilizing setState:
this.setState({
  items: [
    ...this.state.items, // Here we are spreading the current items
    {
      task: 'My new task', // This will be a new task in our Todo list.
    }
  ]
});
  • Also, the spread operator can be used in React to spread attributes (props) in JSX:
render() {
  const props = {};

  props.name = 'Carlos Santana';
  props.age = 30;
  props.email = 'carlos@milkzoft.com';

  return <Person {...props} />;
}
  • Rest parameter: The rest parameter is also represented by (...). The last parameter in a function, prefixed with ..., is called the rest parameter. The rest parameter is an array that will contain the rest of the parameters of a function when the number of arguments exceeds the number of named parameters:
function setNumbers(param1, param2, ...args) {
  // param1 = 1
  // param2 = 2
  // args = [3, 4, 5, 6];
  console.log(param1, param2, ...args); // Log: 1, 2, 3, 4, 5, 6
}

setNumbers(1, 2, 3, 4, 5, 6);
  • Destructuring: The destructuring assignment is the most used JavaScript feature in React. It is an expression that allows you to assign the values or properties of an iterable object to variables. Generally, with this, you can convert your component props into variables (or constants):
// Imagine we are on our <Person> component and we are
// receiving the props (in this.props): name, age and email.
render() {
  // Our props are:
  // { name: 'Carlos Santana', age: 30, email: 'carlos@milkzoft.com' }
  console.log(this.props);

  const { name, age, email } = this.props;

  // Now we can use the nodes as constants...
  console.log(name, age, email);

  return (
    <ul>
      <li>Name: {name}</li>
      <li>Age: {age}</li>
      <li>Email: {email}</li>
    </ul>
  );
}

// Also, destructuring can be used on function parameters
const Person = ({ name, age, email }) => (
  <ul>
    <li>Name: {name}</li>
    <li>Age: {age}</li>
    <li>Email: {email}</li>
  </ul>
);
  • Arrow functions: JavaScript ES6 provides a new way to create functions using the => syntax. These functions are called arrow functions. This new method has a shorter syntax, and arrow functions are anonymous functions. In React, arrow functions are used as a way to bind the this object in your methods instead of binding it in the constructor:
class Person extends Component {
  showProps = () => {
    console.log(this.props); // { name, age, email... }
  }

  render() {
    return (
      // Consoling props:
      <div>{this.showProps()}</div>
    );
  }
}
  • Template literals: The template literal is a new way to create a string using backticks (` `) instead of single quotes (' ') or double quotes (" "). React uses template literals to concatenate class names or to render a string using a ternary operator:
render() {
  const { theme } = this.props;

  return (
    // Using a template literal to concatenate the class name
    <div className={`content ${theme}`}>Some content here...</div>
  );
}
  • Map: The map() method returns a new array with the results of calling a provided function on each element in the calling array. Map use is widespread in React, mainly to render multiple elements inside a React component. For example, it can be used to render a list of tasks:
render() {
  const tasks = [
    { task: 'Task 1' },
    { task: 'Task 2' },
    { task: 'Task 3' }
  ];

  return (
    <ul>
      {tasks.map((item, key) => <li key={key}>{item.task}</li>)}
    </ul>
  );
}
  • Object.assign(): The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object. It will return the target object. This method is used mainly with Redux to create immutable objects and return a new state to the reducers:
export default function coinsReducer(state = initialState, action) {
  switch (action.type) {
    case FETCH_COINS_SUCCESS: {
      const { payload: coins } = action;

      return Object.assign({}, state, {
        coins
      });
    }

    default:
      return state;
  }
}
  • Classes: JavaScript classes, introduced in ES6, are mainly a new syntax for the existing prototype-based inheritance. Classes are functions and are not hoisted. React uses classes to create class components:
import React, { Component } from 'react';

class Home extends Component {
  render() {
    return <h1>I'm Home Component</h1>;
  }
}

export default Home;
  • Static methods: Static methods are not called on instances of the class. Instead, they’re called on the class itself. These are often utility functions, such as functions to create or clone objects. In React, they can be used to define the PropTypes in a component:
import React, { Component } from 'react';
import PropTypes from 'prop-types';
import logo from '../../images/logo.svg';

class Header extends Component {
  static propTypes = {
    title: PropTypes.string.isRequired,
    url: PropTypes.string
  };

  render() {
    const {
      title = 'Welcome to React',
      url = 'http://localhost:3000'
    } = this.props;

    return (
      <header className="App-header">
        <a href={url}>
          <img src={logo} className="App-logo" alt="logo" />
        </a>
        <h1 className="App-title">{title}</h1>
      </header>
    );
  }
}

export default Header;
  • Promises: The Promise object represents the eventual completion (or failure) of an asynchronous operation and its resulting value. Use promises in React to handle requests using axios or fetch; you can also use promises to implement server-side rendering.
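Here's a minimal sketch using fetch inside a component method (the URL and state shape are placeholders):

componentDidMount() {
  // fetch returns a promise; the chain resolves with parsed JSON
  fetch('https://api.example.com/items')
    .then(response => response.json())
    .then(data => this.setState({ items: data }))
    .catch(error => console.log('error:', error));
}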
  • async/await: The async function declaration defines an asynchronous function, which returns an AsyncFunction object. This can also be used to perform a server request, for example, using axios:
import axios from 'axios'; // axios must be installed (npm install axios)

Index.getInitialProps = async () => {
  const url = 'https://api.coinmarketcap.com/v1/ticker/';
  const res = await axios.get(url);

  return {
    coins: res.data
  };
};

If you found this article interesting, you can explore React Cookbook, which covers UI development, animations, component architecture, routing, databases, testing, and debugging with React. React Cookbook will save you from a lot of trial and error and developmental headaches, and you’ll be on the road to becoming a React expert.

Working with CDI Bean in Java

Creating your CDI bean

A CDI bean is an application component that encapsulates some business logic. Beans can be used either from Java code or via the unified EL (the expression language used in JSP and JSF technologies). The beans' life cycles are managed by the container, and beans can be injected into other beans. To define a bean, all you need to do is write a POJO and declare it as a CDI bean. To declare this, there are two primary approaches:

  • Using annotations
  • Using the beans.xml file

Both ways work; however, most folks prefer using annotations over XML, as it's handy and lives right in the coding context. So, why is XML still there? Well, that's because annotations are relatively new in Java (introduced in Java 5). Until they were introduced, XML was the only way in Java to provide configuration information to the application server, and since then it has remained just another way, alongside the annotations approach.

Moreover, if both are used together, XML overrides the annotations. Some developers and application administrators perform temporary changes or hot-fixes by overriding hard-coded programmatic configuration values with external XML files. It's worth mentioning that this approach is not a recommended way to deploy things into production.
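For reference, a minimal beans.xml for CDI 2.0 looks like the following sketch (placed in WEB-INF for a web module or META-INF for a JAR):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"
       bean-discovery-mode="annotated">
</beans>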

In this article, you’ll use the annotations approach. Now, start by defining your first CDI bean:

First CDI bean

Perform the below steps:

  1. Defining a CDI bean – Start by creating a new Java class with the name MyPojo, and then write the following code:
@Dependent
public class MyPojo {
    public String getMessage() {
        return "Hello from MyPojo !";
    }
}

This bean is nothing more than a plain old Java object, annotated with the @Dependent annotation. This annotation declares that your POJO is a CDI component with the dependent scope. The dependent scope tells the CDI context that whenever you request an injection of this bean, a new instance will be created.

  2. Now, inject your baby CDI bean into another component. Use a servlet as an example. Create a servlet named ExampleServlet and write the following code:
@WebServlet(urlPatterns = "/cdi-example-1")
public class ExampleServlet extends HttpServlet {

    @Inject
    private MyPojo myPojo;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getOutputStream().println(myPojo.getMessage());
    }
}

You have used the @Inject annotation to obtain an instance of the MyPojo class. Now run the application and visit http://localhost:8080/EnterpriseApplication1-war/cdi-example-1. You should see a page with the following text:

Hello from MyPojo !

Congratulations! You have created and used your first CDI bean.

Providing alternative implementations to your bean

One of the greatest features of CDI is that you can provide two or more different implementations to the same bean. This is very useful if you wish to do one of the following:

  • Handling client-specific business logic that is determined at runtime; for example, providing different payment mechanisms for a purchase transaction
  • Supporting different versions for different deployment scenarios; for example, providing an implementation that handles taxes in the USA and another one for Europe
  • Easier management for test-driven development; for example, you can provide a primary implementation for production and another mocked one for testing

To do this, you should first rewrite your bean as an abstract element (an abstract class or an interface) so that you can provide different implementations, in line with basic OOP principles. Now rewrite your bean as an interface, as follows:

public interface MyPojo {
    String getMessage();
}

Create an implementation class to your new interface:

@Dependent
public class MyPojoImp implements MyPojo {

    @Override
    public String getMessage() {
        return "Hello CDI 2.0 from MyPojoImp";
    }
}

Now, without any modifications to the servlet class, re-run your example; it should give the following output:

Hello CDI 2.0 from MyPojoImp

What happened at runtime? The container received your request to inject a MyPojo instance. Since the injection point refers to an interface and not a concrete class, the container started looking for a concrete class that implements this interface. It detected that the MyPojoImp class satisfies this criterion, so it instantiated it and injected it for you.

Now provide a different implementation. For this, you'll need to create a new class that implements the MyPojo interface. Create a class called AnotherPojoImp as follows:

@Dependent
public class AnotherPojoImp implements MyPojo {

    @Override
    public String getMessage() {
        return "Hello CDI 2.0 from AnotherPojoImp";
    }
}

Seems simple, right? But check your servlet code again: if you were in the container's shoes, how would you determine which implementation should be injected at runtime? If you try to run the previous example, you will end up with the following exception:

Ambiguous dependencies for type MyPojo with qualifiers @Default

You have an ambiguity here, and there should be some means of specifying which implementation version should be used at runtime. In CDI, this is achieved using qualifiers.

Using qualifiers

A qualifier is a user-defined annotation that tells the container which version of the bean implementation you wish to use at runtime. The idea behind qualifiers is quite simple: you define a qualifier, and then you annotate both the bean and the injection point with it.

Now define your first qualifier for the newly created bean implementation and create a new annotation with the following code:

@Qualifier
@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD, PARAMETER})
public @interface AnotherImp {

}

As you can see, the qualifier is a custom-defined annotation, which is itself annotated with the @Qualifier annotation. @Qualifier tells the container that this annotation will act as a qualifier, while @Retention(RUNTIME) tells the JVM that this annotation should be available for reflection at runtime. This way, the container can check for this annotation at runtime. @Target({TYPE, METHOD, FIELD, PARAMETER}) tells the compiler that this annotation can be used on types, methods, fields, and parameters. Note that the @Qualifier annotation is the key annotation here.

Here is the summary of the annotations to make it clearer:

  • @Qualifier: Tells CDI that this annotation is going to be used to distinguish between different implementations of the same interface.
  • @Retention(RUNTIME): Tells the JVM that this annotation is intended to be used at runtime. Required for qualifiers.
  • @Target({TYPE, METHOD, FIELD, PARAMETER}): Tells the compiler that this annotation can be used on the mentioned syntax elements.

Now, add the @AnotherImp annotation to AnotherPojoImp as follows:

@Dependent
@AnotherImp
public class AnotherPojoImp implements MyPojo {

    @Override
    public String getMessage() {
        return "Hello from AnotherPojoImp";
    }
}

The annotation's role here is to tell the container that this version of the implementation is the one qualified as AnotherImp. Now you can reference this version by modifying the servlet as follows:

@WebServlet(urlPatterns = "/cdi-example")
public class ExampleServlet extends HttpServlet {

    @Inject @AnotherImp
    private MyPojo myPojo;

    ...
}

Re-run the example; this time, it should give the following output:

Hello from AnotherPojoImp

But how can you reference the original implementation MyPojoImp? There are two options available to do this:

  • Defining another qualifier for MyPojoImp, like the earlier example
  • Using the default qualifier

The default qualifier, as the name suggests, is the default one for any CDI bean that has not been explicitly qualified. Although an explicit declaration for the default qualifier is considered redundant and useless, it’s possible to explicitly declare your CDI bean as a default one using the @Default annotation, as shown in the following revision to the MyPojoImp class:

@Default
@Dependent
public class MyPojoImp implements MyPojo {
   ...
}

Again, @Default is redundant, but you should consider its existence even if you have not explicitly declared it. Now, to reference the MyPojoImp from the servlet, rewrite it as follows:

@WebServlet(urlPatterns = "/cdi-example")
public class ExampleServlet extends HttpServlet {

    @Inject @Default
    private MyPojo myPojo;

    ...
}

This way, the original MyPojoImp implementation will be injected instead. And likewise, you can eliminate the @Default annotation, as the default implementation will be used by default!

If you found this article interesting, you can explore Abdalla Mahmoud’s Developing Middleware in Java EE 8 to use Java features such as JAX-RS, EJBs, and JPAs for building powerful middleware for newer architectures such as the cloud. This book can help you become an expert in developing middleware for a variety of applications.

Building an Event-Driven Reactive Asynchronous System with Spring Boot and Kafka

Building an event-driven Reactive Asynchronous System

Spring Boot provides a new strategy for application development with the Spring Framework. It enables you to focus only on the application’s functionality rather than on Spring meta configuration, as Spring Boot requires minimal to zero configuration in the Spring application.

This article will show you how to build a sample project that demonstrates how to create a real-time streaming application using event-driven architecture, Spring Cloud Stream, Spring Boot, Apache Kafka, and Spring Netflix Eureka.

Architecture

Here, Netflix Hystrix has been used to implement the circuit breaker pattern and the API Gateway proxy has been configured using Netflix Zuul.

To get started, create an application with three microservices: Account, Customer, and Notification. Whenever you create a customer record or create an account for a customer, a notification service sends an email and a mobile notification.

All three decoupled services—Account, Customer, and Notification—are independently deployable applications. The Account service can be used to create, read, update, and delete customer accounts. It also sends a message to the Kafka topic when a new account is created.

Similarly, the Customer service is used to create, read, update, and delete a customer in the database. It sends a message to the Kafka topic when a new customer is created, and the Notification service sends email and SMS notifications. The Notification service listens on topics from incoming customer and account messages and then processes these messages by sending notifications to the given email and mobile.

The Account and Customer microservices have their own H2 database, and the Notification service uses MongoDB. In this application, you’ll use the Spring Cloud Stream module to provide abstract messaging mechanisms; it is a framework for building event-driven microservice applications.

This example also uses an edge service for the API Gateway, using Netflix Zuul. Zuul is a JVM-based router that Netflix also uses as a server-side load balancer. Spring integrates closely with Netflix Zuul and provides the Spring Cloud Netflix Zuul module.

Introducing Spring Cloud Streaming

Spring Cloud Stream is a framework for building message-driven microservice applications. It abstracts the message producer and consumer code away from broker-specific implementations, and it provides input and output channels for services to communicate with the outside world. Message brokers, such as Kafka and RabbitMQ, can easily be added by injecting a binding dependency into the application code.

Here’s the Maven dependency for Spring Cloud Stream:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-reactive</artifactId>
</dependency>

The preceding Maven dependency brings in Spring Cloud Stream with the reactive programming model. Now, to enable the application to connect to the message broker, use the following code:

@EnableBinding(NotificationStreams.class)
public class StreamsConfig {
}

Here, the @EnableBinding annotation is used to enable connectivity between the application and message broker. This annotation takes one or more interfaces as parameters; in this case, you have passed the NotificationStreams interface as a parameter:

public interface NotificationStreams {

    String INPUT = "notification-in";
    String OUTPUT = "notification-out";

    @Input(INPUT)
    SubscribableChannel subscribe();

    @Output(OUTPUT)
    MessageChannel notifyTo();
}

As you can see, the interface declares input and/or output channels. This is your custom interface in this example, but you can also use other interfaces provided by Spring Cloud Stream:

  • Source: This interface can be used for an application that has a single outbound channel
  • Sink: This interface can be used for an application that has a single inbound channel
  • Processor: This interface can be used for an application that has both an inbound and an outbound channel
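For reference, Spring Cloud Stream's built-in Source interface looks roughly like this (a sketch of the shipped interface):

public interface Source {

    String OUTPUT = "output";

    @Output(Source.OUTPUT)
    MessageChannel output();
}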

In the NotificationStreams interface shown earlier, the @Input annotation identifies an input channel, through which received messages enter the application. Similarly, the @Output annotation identifies an output channel, through which published messages leave the application.

The @Input and @Output annotations take the name parameter as a channel name; if a name is not provided, then the name of the annotated method will be used by default. In this application, Kafka is used as a message broker.

Adding Kafka to your application

Apache Kafka is a publish-subscribe-based, high-performance, and horizontally scalable messaging platform. Originally developed by LinkedIn, it is fast, scalable, and distributed by design. Spring Cloud Stream supports binder implementations for Kafka and RabbitMQ. First, you have to install Kafka on your machine.

Installing and running Kafka

Download Kafka from https://kafka.apache.org/downloads and untar it using the following commands:

> tar -xzf kafka_2.12-1.1.0.tgz
> cd kafka_2.12-1.1.0

Now start ZooKeeper and Kafka on Windows:

> bin\windows\zookeeper-server-start.bat config\zookeeper.properties
> bin\windows\kafka-server-start.bat config\server.properties

You can start ZooKeeper and Kafka on Linux using the following commands:

> bin/zookeeper-server-start.sh config/zookeeper.properties

> bin/kafka-server-start.sh config/server.properties

After starting Kafka on your machine, add the Kafka Maven dependency in your application:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>

Here, the Spring Cloud Stream Kafka binders are added. After adding these dependencies, set the configuration properties for Kafka.

Configuration properties for Kafka

Here is the application.yml configuration file for a microservice:

spring:
  application:
    name: customer-service
  cloud:
    stream:
      kafka:
        binder:
          brokers:
          - localhost:9092
      bindings:
        notification-in:
          destination: notification
          contentType: application/json
        notification-out:
          destination: notification
          contentType: application/json

This file configures the address of the Kafka server to connect to, and the Kafka topic used for both the inbound and outbound streams in your code. The contentType properties tell Spring Cloud Stream to send and receive your message objects as JSON strings on the streams.

Service used to write to Kafka

The following service class is responsible for writing to Kafka in your application:

@Service
public class NotificationService {

    private final NotificationStreams notificationStreams;

    public NotificationService(NotificationStreams notificationStreams) {
        super();
        this.notificationStreams = notificationStreams;
    }

    public void sendNotification(final Notification notification) {
        MessageChannel messageChannel = notificationStreams.notifyTo();
        messageChannel.send(MessageBuilder.withPayload(notification)
                .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON)
                .build());
    }
}

The sendNotification() method uses an injected NotificationStreams object to send messages, represented by the Notification object, in your application. Now look at the following controller class, which triggers sending the message to Kafka.

Rest API controller

Here’s a Rest Controller class that you can use to create a REST API endpoint. This controller will trigger sending a message to Kafka using the NotificationService Spring Bean:

@RestController
public class CustomerController {
    ...
    @Autowired
    CustomerRepository customerRepository;

    @Autowired
    AccountService accountService;

    @Autowired
    NotificationService notificationService;

    @PostMapping(value = "/customer")
    public Customer save(@RequestBody Customer customer) {
        Notification notification = new Notification("Customer is created",
                "admin@dineshonjava.com", "9852XXX122");
        notificationService.sendNotification(notification);
        return customerRepository.save(customer);
    }
    ...
}

The preceding controller class of the Customer service has a dependency on NotificationService. The save() method is responsible for creating a customer in the corresponding database; it creates a notification message using the Notification object and sends it to Kafka using the sendNotification() method of NotificationService. Here's the other side, where a listener consumes this message from the Kafka topic named notification.

Listening to a Kafka topic

Create a listener NotificationListener class that will be used to listen to messages on the Kafka notification topic and send email and SMS notifications to the customer:

@Component
public class NotificationListener {

    @StreamListener(NotificationStreams.INPUT)
    public void sendMailNotification(@Payload Notification notification) {
        System.out.println("Sent notification to email: " + notification.getEmail()
                + " Message: " + notification.getMessage());
    }

    @StreamListener(NotificationStreams.INPUT)
    public void sendSMSNotification(@Payload Notification notification) {
        System.out.println("Notified with SMS to mobile: " + notification.getMobile()
                + " Message: " + notification.getMessage());
    }
}

The NotificationListener class has two methods: sendMailNotification() and sendSMSNotification(). These methods will be invoked by Spring Cloud Stream for every new Notification message object on the Kafka notification topic. They are annotated with @StreamListener, which turns a method into a listener that receives events for stream processing.

This article doesn’t have the complete code for this event-driven application; you can find the complete code in the GitHub repository at https://github.com/PacktPublishing/Mastering-Spring-Boot-2.0.

Now run this application to test how the event-driven microservice works. First, ensure that Kafka and ZooKeeper are running. The Kafka server will be available at localhost:9092.

Now run EurekaServer, ApiZuulService, AccountService, CustomerService, and NotificationService. Open the Eureka dashboard in your browser to confirm that all services are registered.

With all services running, create a Customer object to trigger the event to Kafka. Here, Postman is used as the REST client: create a new customer using the http://localhost:8080/api/customers/customer API endpoint through the Zuul API Gateway, as sketched below.
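A request along the following lines triggers the flow (the JSON fields are illustrative; your Customer entity may differ):

POST http://localhost:8080/api/customers/customer
Content-Type: application/json

{
  "customerId": 2001,
  "firstName": "Dinesh",
  "lastName": "Rajput",
  "email": "dinesh@dineshonjava.com"
}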

You have entered a new customer record in the database. Whenever a new customer is created, it triggers a message to Kafka to send email and SMS notifications via the Notification microservice; the Notification microservice prints these notifications to its console.

You have created a new customer using the Customer service, which triggers a notification to be sent to the customer using the Kafka broker. It is a message-driven asynchronous call. Similarly, whenever you create an account record for a new customer, Kafka listens for another new notification message for the account creation.

Now verify the console of the Notification microservice again.

You have successfully created an account record for the customer, which has triggered a message to Kafka to send email and SMS notifications to the customer. You can check the customer record for this customer by visiting http://localhost:8080/api/customers/customer/2001.

As you can see, the customer has complete information including an associated account object.

You've now learned how to create an event-driven microservice using Spring Cloud Stream, the Kafka event bus, Spring Netflix Zuul, and Spring Discovery services.

If you found this article interesting, you can explore Dinesh Rajput’s Mastering Spring Boot 2.0 to learn how to develop, test, and deploy your Spring Boot distributed application and explore various best practices. This book will address challenges related to power that come with Spring Boot’s great configurability and flexibility.

Creating Queues for In-Order Executions With JavaScript by using WeakMap()

A queue is a programming construct that bears a heavy resemblance to real-world queues, for example, a queue at the movie theater, ATMs, or the bank. Queues, as opposed to stacks, are first-in-first-out (FIFO), so whatever goes in first comes out first as well. This is especially helpful when you would like to maintain data in the same sequence in which it flows.

Types of queues

Before you understand queues, take a quick look at the types of queues that you may want to use in your applications:

  • Simple queue: In a simple FIFO queue, the order is retained and the data leaves in the same order in which it comes in
  • Priority queue: A queue in which the elements are given a predefined priority
  • Circular queue: Similar to a simple queue, except that the back of the queue is followed by the front of the queue
  • Double-ended queue (Deque): Similar to the simple queue, but can add or remove elements from either the front or the back of the queue

Implementing APIs

Implementing an API is never as easy as it seems. When making generic classes, you can never predict what kind of situation your queue is going to be used in. Some of the most common operations that you can add to the queue are as follows:

  • add(): Pushes an item to the back of the queue
  • remove(): Removes an item from the start of the queue
  • peek(): Shows the last item added to the queue
  • front(): Returns the item at the front of the queue
  • clear(): Empties the queue
  • size(): Gets the current size of the queue

Creating a queue

Of the four types of queues discussed earlier, this article will teach you how to implement a simple queue and a priority queue.

A simple queue

To create a queue, use the following steps:

  1. Define a constructor():
class Queue {
    constructor() {
    }
}
  2. Use WeakMap() for in-memory data storage:
const qKey = {};
const items = new WeakMap();

class Queue {
    constructor() {
    }
}
  3. Implement the methods described previously in the API:
var Queue = (() => {

    const qKey = {};

    const items = new WeakMap();

    class Queue {
        constructor() {
            items.set(qKey, []);
        }
        add(element) {
            let queue = items.get(qKey);
            queue.push(element);
        }
        remove() {
            let queue = items.get(qKey);
            return queue.shift();
        }
        peek() {
            let queue = items.get(qKey);
            return queue[queue.length - 1];
        }
        front() {
            let queue = items.get(qKey);
            return queue[0];
        }
        clear() {
            items.set(qKey, []);
        }
        size() {
            return items.get(qKey).length;
        }
    }
    return Queue;
})();

You need to wrap the entire class inside an IIFE because you don't want to make the Queue items accessible from the outside.

Testing a simple queue

To test this queue, you can simply instantiate it and add/remove some items to/from the queue:

var simpleQueue = new Queue();
simpleQueue.add(10);
simpleQueue.add(20);

console.log(simpleQueue.items); // prints undefined

console.log(simpleQueue.size()); // prints 2
console.log(simpleQueue.remove()); // prints 10
console.log(simpleQueue.size()); // prints 1

simpleQueue.clear();
console.log(simpleQueue.size()); // prints 0

As you can see in the above code, all the elements are treated the same. Irrespective of the data they contain, elements are always handled in a FIFO fashion. Although that is a good approach, sometimes you may need something more: the ability to prioritize elements as they enter and leave the queue.

Priority Queue

A priority queue is operationally similar to a simple queue, that is, they support the same API, but there is a small addition to the data they hold. Along with the element (your data), they can also persist a priority, which is just a numerical value indicating the priority of your element in the queue.

Addition or removal of these elements from the queue is based on priority. You can have either a minimum priority queue or a maximum priority queue, to establish whether you are ordering elements based on increasing or decreasing priority. Now, see how the add() method of the simple queue changes for a priority queue:

add(newEl) {
    let queue = items.get(pqkey);
    let newElPosition = queue.length;
    if(!queue.length) {
        queue.push(newEl);
        return;
    }
    for (let [i,v] of queue.entries()) {
        if(newEl.priority > v.priority) {
             newElPosition = i;
             break;
        }
    }
    queue.splice(newElPosition, 0, newEl);
}

Since you are accounting for the priority of the elements while they are being inserted into the queue, you do not have to concern yourself with priority while you remove elements from the queue. So, the remove() method is the same for both simple and priority queues. Other utility methods, such as front(), clear(), peek(), and size(), have no correlation with the type of data that is being saved in the queue, so they remain unchanged as well.

A smart move while creating a priority queue is to optimize your code and decide whether you would like to determine the priority at the time of addition or removal. That way, you are not over-calculating or re-analyzing your dataset at each step. A complete sketch follows.
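Putting it all together, a complete PriorityQueue might look as follows (a sketch following the same WeakMap-and-IIFE pattern as the simple queue; only add() differs, and a print() helper is added for inspection):

var PriorityQueue = (() => {
    const pqkey = {};
    const items = new WeakMap();

    class PriorityQueue {
        constructor() {
            items.set(pqkey, []);
        }
        add(newEl) {
            let queue = items.get(pqkey);
            let newElPosition = queue.length;
            if (!queue.length) {
                queue.push(newEl);
                return;
            }
            // insert before the first element with a lower priority
            for (let [i, v] of queue.entries()) {
                if (newEl.priority > v.priority) {
                    newElPosition = i;
                    break;
                }
            }
            queue.splice(newElPosition, 0, newEl);
        }
        remove() {
            return items.get(pqkey).shift();
        }
        print() {
            console.log(...items.get(pqkey));
        }
        size() {
            return items.get(pqkey).length;
        }
    }
    return PriorityQueue;
})();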

Testing a priority queue

First set up the data for testing the queue:

var priorityQueue = new PriorityQueue();

priorityQueue.add({ el : 1, priority: 1 });

// state of Queue
// [1]
//  ^

priorityQueue.add({ el : 2, priority: 2 });

// state of Queue
// [2, 1]
//  ^

priorityQueue.add({ el : 3, priority: 3 });

// state of Queue
// [3, 2, 1]
//  ^

priorityQueue.add({ el : 4, priority: 3 });

// state of Queue
// [3, 4, 2, 1]
//     ^

priorityQueue.add({ el : 5, priority: 2 });

// state of Queue
// [3, 4, 2, 5, 1]
//           ^

As the state-of-queue comments show, each insertion places the element behind all elements of higher or equal priority. Note how, when you add an element with priority 2, it gets placed ahead of all the elements with priority 1:

priorityQueue.add({ el : 6, priority: 1 });

// state of Queue
// [3, 4, 2, 5, 1, 6]
//                 ^

And when you add an element with priority 1 (the lowest), it gets added to the end of the queue. This last element has the lowest priority in the order, which makes it the last element of the queue, thus keeping all the elements ordered based on priority.

Now, remove the elements from the queue:

console.log(priorityQueue.remove());
// prints { el: 3, priority: 3 }

// state of Queue
// [4, 2, 5, 1, 6]

console.log(priorityQueue.remove());
// prints { el: 4, priority: 3 }

// state of Queue
// [2, 5, 1, 6]

console.log(priorityQueue.remove());
// prints { el: 2, priority: 2 }

// state of Queue
// [5, 1, 6]

priorityQueue.print();
// prints { el: 5, priority: 2 } { el: 1, priority: 1 } { el: 6, priority: 1 }

There you have it: the creation of simple and priority queues in JavaScript using WeakMap().

If you found this article interesting, you can explore Kashyap Mukkamala’s Hands-On Data Structures and Algorithms with JavaScript to increase your productivity by implementing complex data structures and algorithms. This book will help you gain the skills and expertise necessary to create and employ various data structures in a way that is demanded by your project or use case.