Streams in NodeJS

 
     Streams are flows of data from one part of a program to another. NodeJS provides the stream module to create streams, but it is cumbersome to use directly, so we will demonstrate the power of streams through an application of them. The fs module we already know uses streams internally. This article shows you why and how to use them.

   Previously we saw File IO in NodeJS. File IO is a very time-consuming task (for a computer). It requires seeking a location on the disk, reading properties, reading the 1s and 0s, determining what they represent, processing them, and so on. The File IO methods we have seen up till now work fine when the file is relatively small, say up to 100 MB. If the file is large, the IO time becomes large, which can halt the system if care is not taken.


Our current model works as shown above. The file we need to process is first brought into RAM as a whole. This takes some finite amount of time (say 20s in the example above). Then this data is processed by some function of ours, which also takes time (say 4s at 50 MB/s). So after a user clicks a button, he would need to wait a total of 24 seconds before he can see anything. This is not desirable.

Say the file contains some article. Instead of using the above method, we can read it line by line or paragraph by paragraph. Although loading the whole file still takes the same amount of time, the user can see the first paragraph within a short time (0.12s), and while he is reading that paragraph, the other paragraphs arrive one by one. Hence the user perceives the system as fast because his wait time is reduced.

Hope this gives you an idea of how streams work.

In NodeJS these streams are event emitters which emit different events, but only a few are frequently used, namely: 
  • data
  • end
We will use the same myTextFile.txt in our code. You can make that file large so that you can see the effect. 

Reading using Streams
     Change the server.js code to read using stream.


"use strict";
var fs = require('fs');
var readable = fs.createReadStream('./myTextFile.txt');

var string = "";
readable.on('data', function(chunk){
     string += chunk;
});
readable.on('end', function(){
    console.log(string);
    console.log('Finished reading file');
});


Running this code will give you an output like:

       We first create a readable stream from myTextFile.txt using the fs module. Whenever a chunk of data is ready to be processed, the stream emits a ‘data’ event. On every data event we add the current chunk to our string. You can put your processing logic inside this event handler so that the data is processed in chunks. After the stream has finished reading the file, it emits an ‘end’ event. So after the end event is emitted, we display the data on the console. 

Similar to the readable stream, there is a writable stream. This is very helpful when some data is transferred over a network and you need to write that data into a file. Instead of waiting for the data to be transmitted completely before writing, you can write the data to the file as soon as it arrives.

Example of a writable stream

"use strict";
var fs = require('fs');
var writeable = fs.createWriteStream('./myTextFile.txt');

writeable.write('This is some new text');
writeable.end();

In practice, though, you will rarely use streams to write files like this. Instead you will use a combination of both readable and writable streams to read from a large file and write to another file.

"use strict";
var fs = require('fs');
var readable = fs.createReadStream('./myTextFile.txt');
var writeable = fs.createWriteStream('./myTextFileCopy.txt');

readable.on('data', function(chunk){
     writeable.write(chunk);
});

readable.on('end', function(){
    console.log('Finished copying');
});

Streams also provide a pipe function to ease the above functionality. The above example can be written as: 

"use strict";
var fs = require('fs');
var readable = fs.createReadStream('./myTextFile.txt');
var writeable = fs.createWriteStream('./myTextFileCopy.txt');
readable.pipe(writeable);


Pipe allows you to directly transfer the chunks from one stream to another.

FileIO in NodeJS

 
      In any system, file input/output is a crucial functionality. In NodeJS, file IO is required for serving HTML pages, when you want your client to download some file, or when you build tools with NodeJS other than web servers. Following are some of the file IO methods which are frequently used by developers.

The examples shown consider the following file structure:

The server.js file will hold the code we execute, and myTextFile.txt will contain some message which we will read or write to. For file IO, NodeJS provides the fs module, which is included in the NodeJS core modules. You do not need to run npm install for the fs module.

myTextFile.txt
Hello..!! I am inside myTextFile.txt                                                                             

Reading a file
      The fs module provides two functions to read files. One method is synchronous, i.e. it runs on the main thread (a blocking operation), and the other one is asynchronous. Here is the basic syntax for each:

Synchronous Reading

"use strict";
const fs = require('fs');
var readFileMainThread = function(){
   var content = fs.readFileSync('./myTextFile.txt');
   console.log(content.toString());
};
readFileMainThread();

Asynchronous Reading

"use strict";
const fs = require('fs');
var readFileAsync = function(){
   fs.readFile('./myTextFile.txt', function(err, content){
       if(err) throw err;
       console.log(content.toString());
   });
};
readFileAsync();

In both cases, the content returned is in the form of a buffer. To display the actual content you need to convert it into a string. Buffers hold the exact bits in which the data is stored on the computer -- as 1s and 0s. When you print a buffer to the console, it shows hexadecimal numbers whose binary representation is how the data is stored on the disk. 
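For example, a buffer holding the two characters 'Hi' prints as the hexadecimal bytes 48 and 69 until you convert it back to a string:

```javascript
"use strict";
// Buffer.from() creates a buffer holding the raw bytes of the string
var buf = Buffer.from('Hi');
console.log(buf);            // prints <Buffer 48 69>
console.log(buf.toString()); // prints Hi
```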

Writing a file
    Writing is also similar to reading. The fs module provides a synchronous method as well as an asynchronous method to write something to a file. 

Synchronous Writing

"use strict";
const fs = require('fs');
var writeFileMainThread = function(){
   var content = 'This is my new content to write to a file';
   fs.writeFileSync('./myTextFile.txt', content);
};
writeFileMainThread();

Asynchronous Writing

"use strict";
const fs = require('fs');
var writeFileAsync = function(){
   var content = 'This content will also be written to a file. But Asynchronously';
   fs.writeFile('./myTextFile.txt', content, function(err){
       if(err){
           console.log(err); // Some error has occurred while writing
       }else{
           console.log('File written successfully'); // Successful
       }
   });
};
writeFileAsync();

In both cases, the first argument to the function is the path of the file to which the content is to be written. If the file does not exist, it is automatically created. The second argument is the content to be written. Here we pass a string instead of a buffer (which is what reading returns). 

Checking Existence of Files and Folders
    Many times you are saving some dynamic data into a file. In such cases, if you try to read a non-existing file, your application will crash. To prevent this, it is good practice to first check whether the file exists. Although the fs module provides both synchronous and asynchronous methods to achieve this, the synchronous function is more commonly used. This is how you use it:

"use strict";
const fs = require('fs');
var readFileAfterChecking = function(){
    if(fs.existsSync('./myTextFile.txt')){
       var content = fs.readFileSync('./myTextFile.txt').toString();
       console.log(content);
    }else{
       console.log('File does not exist');
    }
};
readFileAfterChecking();

The fs.existsSync() function returns a boolean value: true if the file exists, else false. It can be used to check the existence of both files and folders. Just pass the folder path as the first argument.

Creating a directory
    Sometimes your program needs to create a directory, either in your server or in a command line tool. The fs module provides functions to create a directory on your disk. Again, both synchronous and asynchronous methods are available.

"use strict";
const fs = require('fs');
var createDirectorySync = function(){
    if(!fs.existsSync('./myDirectory')){
       fs.mkdirSync('./myDirectory');
    }else{
       console.log('Directory already exists');
    }
};
createDirectorySync();

The fs.mkdirSync() function will throw an error if the folder already exists. Hence it is advisable to call it only if the directory does not exist.

Please note that to create a folder, its parent folder must exist, i.e. if you are trying to create the /folder1/child1 folder, you need to ensure that folder1 exists. The fs.mkdir function will not create the whole path if it does not exist.

Reading Contents of a Directory
The fs module provides two functions to read the contents of a directory. The synchronous one is frequently used together with the forEach() function of JavaScript. 

Synchronous Method

"use strict";
const fs = require('fs');
var readDirectorySync = function(){
    if(fs.existsSync('./myDirectory')){
       fs.readdirSync('./myDirectory').forEach(function(content){
           console.log(content);
       });
    }else{
       console.log('Directory does not exist');
    }
};
readDirectorySync();

Asynchronous Method

"use strict";
const fs = require('fs');
var readDirectoryAsync = function(){
    if(fs.existsSync('./myDirectory')){
       fs.readdir('./myDirectory', function(err, contents){
           if(err) throw err;
           contents.forEach(function(content){
              console.log(content);
           });
       });
    }else{
       console.log('Directory does not exist');
    }
};
readDirectoryAsync();

The fs.readdirSync() function returns an array containing the names of the files and folders present in the directory. IT JUST RETURNS THE NAMES. You have to determine whether a particular name belongs to a folder or a file. Once we get the array of contents, we iterate over it using the forEach() function of JavaScript.

Checking if file or folder
In the previous function we had a problem: fs does not directly tell us which entry is a directory and which is a file. To find this out you need another function, fs.statSync().


"use strict";
const fs = require('fs');
const path = require('path');
var readDirectorySync = function(){
    if(fs.existsSync('./myDirectory')){
       fs.readdirSync('./myDirectory').forEach(function(content){
           var stat = fs.statSync(path.join('./myDirectory', content));
           if(stat.isDirectory()){
               console.log(content + ' is a directory');
           }else{
               console.log(content + ' is a file');
           }
       });
    }else{
       console.log('Directory does not exist');
    }
};
readDirectorySync();

The fs.statSync() function returns an object having a function isDirectory(), which returns true if the passed path is a folder, else false. Note that fs.statSync() requires the full path of the entry you are testing (hence the path.join() in the code); passing just the name won’t work.

Deleting a file
The fs module provides a function, fs.unlink, to delete a FILE (only a file) from the disk. Again it has two variants, synchronous and asynchronous.


"use strict";
const fs = require('fs');
var deleteFileAfterChecking = function(){
    if(fs.existsSync('./myTextFile.txt')){
       fs.unlinkSync('./myTextFile.txt');
       // File deleted
    }else{
       console.log('File does not exist');
    }
};
deleteFileAfterChecking();

If you try to delete a nonexistent file then the script will crash. So be careful.

Deleting a folder
    The fs module provides a function, fs.rmdir, which accepts the path of the directory to be removed as its argument. This function has two variants - a synchronous one and an asynchronous one. Note: this function can only delete empty directories. If the directory is not empty, you first need to remove its contents using fs.unlink on files and fs.rmdir on child directories. 


"use strict";
const fs = require('fs');
var deleteFolderAfterChecking = function(){
    if(fs.existsSync('./myFolder')){
       fs.rmdirSync('./myFolder');
       // Folder deleted
    }else{
       console.log('Folder does not exist');
    }
};
deleteFolderAfterChecking();


That is a lot about file IO in NodeJS. 

Global Events in NodeJS

 
       We have seen what events and event emitters are in NodeJS. Also we have seen how to create and use our own events. This article exposes you to some of the predefined events in NodeJS which play a crucial role in your application life cycle. 

Following are some of the global events in NodeJS.

Open the server.js file and try the following code to see the working of each event. The following events are emitted on the process global object. To register a listener for an event, we use 

process.on(<event name>, listenerFunction)
exit
This event is emitted whenever a NodeJS process is terminating, either because the event loop has no more work pending or because process.exit() was called explicitly.


"use strict";

var i = 0;
setInterval(function(){
  console.log(i);
  i++;

  if(i==2){
    process.exit();
  }
  
}, 1000);

process.on('exit', function(code){
  console.log("Exiting: "+code);
});


SIGINT
        SIGINT stands for SIGNAL INTERRUPT. This event is triggered whenever the user interrupts a Node process by pressing Ctrl+C. The listener you define here overrides the default behaviour, i.e. stopping the process. That is why we add a process.exit() inside the listener, else the process would never stop on an interrupt.


"use strict";

var i = 0;
setInterval(function(){
  console.log(i);
  i++;
}, 1000);

process.on('SIGINT', function(){
  console.log("Exiting process");
  process.exit(0);
});


It will keep on printing integers until we interrupt it by pressing Ctrl+C.


On pressing Ctrl+C, our listener is executed, which first prints the message defined in it. After this the process exits because of process.exit().

warning
       This event is emitted whenever the current NodeJS process emits a warning. You can emit a warning using the process.emitWarning() function. This event is used by many database modules when some configuration mismatch occurs. You should prefer it over console.log() when developing modules of your own.


"use strict";

var i = 0;
setInterval(function(){
  console.log(i);
  i++;
  if(i==3){
    process.emitWarning("i has reached 3");
  }
}, 1000);

process.on('warning', function(){
  console.log("warning received");
});


As you can see, when i becomes 3, a warning is emitted which prints (node:22536) Warning: i has reached 3. After this our listener is called, which prints warning received. The (node:22536) before the message makes it stand out from your other logs. The number 22536 is the process id which the OS assigns to the NodeJS process. It will be different every time you run this.

uncaughtException
      This event is emitted whenever a function throws an error that is not caught using a try-catch block. This is a very powerful event, as you can prevent your app from crashing even after an error has occurred. It allows you to perform some custom action with the error, like logging it or sending it to the developer team. If you catch an error yourself, this event is not emitted.


"use strict";

// The listener must be registered before the error is thrown,
// otherwise the process crashes before the handler exists.
process.on('uncaughtException', function(err){
  console.log(err);
});

var validString = `{"website": "compiletimeerror.com"}`;
var invalidString = `{"website" =  "compiletimeerror.com"}`;

var validJSON = JSON.parse(validString);
var invalidJSON = JSON.parse(invalidString);


As you can see, since invalidString is not valid JSON, NodeJS throws an error when you try to parse it. Since we are not catching this error, it shows up in uncaughtException. Without such a listener, an uncaught error crashes the process. Since we do not want a server to crash on error, we can write code to recover or restart the server within this uncaughtException event handler.

Now see the same thing with try catch.


"use strict";

var validString = `{"website": "compiletimeerror.com"}`;
var invalidString = `{"website" =  "compiletimeerror.com"}`;

try{
  var validJSON = JSON.parse(validString);
  var invalidJSON = JSON.parse(invalidString);
}catch(err){
  console.log("Some error");
}

process.on('uncaughtException', function(err){
  console.log(err);
});

In this example we surround the parsing calls with a try-catch block. When an error occurs, the catch block catches it, which prevents the server from crashing. In such cases, since the exception is handled, the uncaughtException event is not triggered.


As you can see, the catch block executes our code and prints “Some error” instead of the error message.

These are the frequently used process events in NodeJS. There are others also but are less frequently used. You can find more information about them in the official documentation of NodeJS.


Event Loop and Event Emitters in NodeJS

 
      Previously we have seen the thread structure of NodeJS. We saw that there is a single thread which handles client connections and passes on heavy tasks to child threads. In this article we will see how to interact with those threads.

Event Loop
Recall how the bakery analogy was used to explain the multi-threaded nature of NodeJS.

“There is a bakery in which customers come and give order to you. You pass on the baking of cakes to different ovens according to their availability. Once a cake is ready you serve it back to the customer who requested it”

In this example, you handle the assignment of work to ovens and other management tasks. In NodeJS terminology, you are the Event Loop. The event loop is a thread which keeps running in the background and checks for work that other threads have finished or that is still pending. 

In frameworks like Ruby on Rails or Django, where the server spawns a thread per client, there is no main thread, i.e. nothing like the Event Loop. Imagine you go to a bakery and place an order, and the shopkeeper with his many ovens, instead of baking the cake as a whole, divides it into parts, bakes them separately, and then joins them together to form the complete cake. Although baking one cake may become faster, he handles only one customer at a time, which leaves other customers waiting for their turn. 

This is completely opposite to the non-blocking I/O of NodeJS, and it is the reason why NodeJS can handle far more concurrent connections, as is evident from the following statistics.

Source: hashrocket.com
The graph shows the maximum number of concurrent connections that a server built with different technologies can handle. 

Events
      Again, let us go back to the NodeJS bakery. We have the clients, the event loop, and the threads. But have you thought about how you will know when a cake is ready? The answer is simple: the oven beeps an alarm. This alarm that the oven sounds when the cake is baked is what NodeJS calls an event.

Events are signals which tell the main thread that a thread has finished executing and is ready with its result. Events are extensively used in NodeJS due to its non-blocking architecture. All the network interfaces and database management modules emit some sort of event to interact with the main thread.

Now let us create some events.

Create server.js file with the following contents

"use strict";

const events = require('events');
const emitter = new events.EventEmitter();
let sayHelloListener = function(){
    console.log('Hello World');
};
emitter.addListener('greet', sayHelloListener);
emitter.emit('greet');

As usual, we need to import the events module to use events. For emitting events we use the EventEmitter class of the events module. We then define a listener: a piece of code which is executed whenever an event occurs. That means that whenever the event loop hears the greet event, sayHelloListener() will be executed. Finally we emit the greet event. You should see something like this

NOTE: Once the event is emitted and executed, the event loop still keeps listening for these events until you remove the listener.

This example shows the usage of events in the main thread. Now let us see events from another thread. For this we will use the setTimeout() function to execute a function in another thread.


"use strict";

const events = require('events');
const emitter = new events.EventEmitter();

let sayHelloListener = function(){
    console.log('Hello World');
};

emitter.addListener('greet', sayHelloListener);

setTimeout(function(){
  emitter.emit('greet');
}, 5000);

console.log('End of script');

Here we emit the greet event from another thread, which is initially paused for 5 seconds.
Initially the output will be


Observe that the program has not exited as the event loop detected that some thread is still executing. Hence the event loop keeps waiting for the thread to finish instead of exiting.

After 5 seconds, the output becomes


This is the concept of events. There are various applications of events which you will see in the next articles.


Threads In NodeJS

 
        A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. Understand it this way:

Imagine you have to bake some cakes. In the first case, you have a single oven for baking. When an order comes for a cake, you put the batter in the oven and set the timer. When the timer reaches zero you take out the cake and serve it. Now imagine that another order comes while the first cake is still cooking, well obviously you will have to wait till the first one is finished baking before you bake the second one. 

This example is similar to a single-threaded architecture. A single thread can execute only a single task at any instant of time. If another process needs to execute, it has to wait till the thread becomes free.

Now suppose you have 10 ovens. In this case you can bake up to 10 cakes at a time, and it is probable that not all 10 are in use at the same time. This is how a multi-threaded architecture works.

That is about threads. But what is NodeJS? Single-threaded or multi-threaded?
Again, go back to the bakery. Whenever a customer comes, does he put the cake in the oven himself? No, right? A customer gives his/her order to you, and you then look for free ovens and start baking. This is the model followed by NodeJS. 

NodeJS has a main process which is responsible for communicating with the clients: it takes instructions from the customers, assigns work to any free threads from its internal thread pool, fetches results from those threads, and sends the results back to the customers. 

Now let a customer enter your bakery and give you some task which takes you around 5 minutes to complete yourself. During these 5 minutes, since you are busy executing that task, you won’t be able to handle other customers or assign new work. This would cause a delay for the customers, who may leave in frustration.  

Similarly, if you keep NodeJS’ main thread busy, it will cause delays in responding to clients. This is bad design, as any server should try to minimise its response delay. Hence it is good practice to execute long, time-consuming tasks in child threads.

See the working of threads in the following example.

Hello World Application 

In this example, we will display the contents of a file to console. For this we will be using ‘fs’ module of NodeJS which enables us to perform file I/O.

Create a new project and create a new file server.js and myTextFile.txt with the following content:

server.js


"use strict";
const fs = require('fs');
const path = require('path');

var sayHello = function(){
  var str = fs.readFileSync(path.join(__dirname, 'myTextFile.txt')).toString();
  console.log(str);

  console.log('Finished reading file');
};

sayHello();



myTextFile.txt
Hello..!! I am inside myTextFile.txt                                                                                                         

Here the ‘path’ module is used for specifying the address of the text file independent of the operating system. 

When you execute this program, you should see something like this

As expected. No surprises here.
Now let us read the file in another thread.

Modify the sayHello() function above as 

"use strict";
const fs = require('fs');
const path = require('path');

var sayHello = function(){
  fs.readFile(path.join(__dirname, 'myTextFile.txt'), function(err, buffer){
     var str = buffer.toString();
     console.log(str);
  });

  console.log('Finished reading file');
};

sayHello();

Now run and see the output.

Unexpected? You might be wondering why the second console.log() is executed first. Well, that is where threads come into play. 

The code you write always runs on the main thread. So when you call the sayHello() function, you are calling it on the main thread. Inside sayHello(), the first statement calls fs.readFile(), BUT the actual file reading happens in a separate thread; the logic for this is defined in the core NodeJS modules. You pass a callback function as an argument to readFile(), which is executed after the reading of the file is complete. 

fs.readFile() executes in another thread as said above BUT the outer console.log() executes in the main thread. That is why it executes while the other thread is processing. 

In the next article we will show how to create another thread manually.


Callbacks concept in NodeJS

 
     One of the most confusing concepts in programming is used in NodeJS - the Callback. Despite being complex, callbacks are used in other programming languages as well. This article exposes you to the concept of callbacks. 

Let us understand with the help of another simple application.

Hello World Broadcast
     Till now all the code we saw ran instantly. But a situation may arise where you need to execute some function after a certain delay. Let us say we want to display a message every second. Then we would need to pause the program for exactly 1 second before printing the message again. 

One way to do this is to run a for loop with a large counter. If your computer executes 1 instruction in a microsecond, then you would need to run a loop of 1,000,000 iterations. But this would be specific to your machine and won’t work as expected on different machines. 

The above functionality can be achieved by using the setTimeout() or setInterval() method of JavaScript. Here is how you would do this:

server.js

"use strict";
const message = 'Hello World';

setInterval( function(){
  console.log(message);
}, 1000);


Hey hey…!! What is this “use strict”; on top? And how can you pass a function as an argument to another function? 

The “use strict”; on top of the file ensures that common JavaScript errors are properly caught. If your code doesn’t include this line, then you can, for example, assign to undeclared variables without any error being raised. 
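A quick illustration: in strict mode, assigning to an undeclared variable throws a ReferenceError instead of silently creating a global:

```javascript
"use strict";

try {
  undeclaredVariable = 5; // not declared with var/let/const
} catch (e) {
  console.log(e.name); // prints ReferenceError
}
```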

For those coming from C, C++, Python or similar languages, passing a function as an argument may seem like a mistake. But this is a core part of functional programming: JavaScript allows you to pass an entire function as an argument. The function passed as an argument into another function is called a callback. This allows the receiving function to call the callback without having to import any modules or libraries. 

On running this you should see something like this:


This same thing is available in your browser’s developer console. 
Open your developer console by Ctrl+Shift+i. You can execute javascript codes in this console.

Try the following code in the console.


var a = function(callback){
           console.log(12345);
           callback();
       };

a(function(){
  console.log('Hello World');
});

Here you can see that when you call a(..), it first prints 12345, and after printing this, the a() function calls the callback function. It is the callback function which prints Hello World to the console.


You can even pass a previously declared function as a callback.

Example:

server.js


"use strict";
var a  = function(callback){
  console.log('Inside function a');
  callback();
};

var b = function(){
  console.log('Inside function b');
};

a(b);


Here we declare two functions a and b, which print “Inside function a” and “Inside function b” respectively. Then we pass the already declared function b() as an argument to a(). Observe that when passing a function as a callback, we only use the function name. 

i.e. use a(b); and not a( b() );


Of course you can pass multiple callback functions.

"use strict";
var a  = function(callback, callback1){
  console.log('Inside function a');
  callback1();
  callback();
};

var b = function(){
  console.log('Inside function b');
};

var c = function(){
  console.log('Inside function c');
};

a(b, c);



As you can see, what matters is the order in which the callback functions are called inside a(), not the order in which they are passed as arguments. Even though we pass the function b() as the first argument, callback1() is called before callback() inside a(). That is why c() is called before b().
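NodeJS builds a convention on top of callbacks: a callback's first parameter is reserved for an error (or null on success). A minimal sketch of this "error-first" pattern, using a hypothetical divide() function:

```javascript
"use strict";

// Error-first convention: callback(err, result)
var divide = function(a, b, callback){
  if(b === 0){
    callback(new Error('Division by zero'), null);
  }else{
    callback(null, a / b);
  }
};

divide(10, 2, function(err, result){
  if(err){
    console.log('Error: ' + err.message);
  }else{
    console.log('Result: ' + result); // prints Result: 5
  }
});
```

You will meet this exact shape in the core modules, e.g. fs.readFile(path, function(err, content){...}).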

This concept of callbacks is very heavily used in NodeJS. Its use will be demonstrated in the article about Threads in NodeJS.


Importance of Private Constructors in Java

 
     New and young Java learners are quite happy using public constructors. The concept of private constructors is quite a hazy idea to them, as the real-world scenarios for using them are not neatly presented. This article brings the concept of private constructors to life with code examples and real-life use cases.

      We already know that to create an instance of a class, a constructor must be called, and in real-life cases the constructor of a class is called from outside the said class. Hence, naturally, constructors are most of the time declared as public. 

       Let us discuss the effect of declaring a constructor as private. It is simple: it cannot be called from outside the class, and hence object creation is not possible straightaway. But this uncomfortable situation can be wisely utilized to gain more control over the object creation mechanism. There are plenty of real-life situations where an application unit, by virtue of its requirements, might allow a maximum number (say N) of objects. The class that represents this application unit must have control logic in its code so that no one outside the class can create more than N objects. If the constructor of this class is declared as public, then the control is gone, because any application outside this class can create objects as many times as it wants. In such situations, when you need more sophisticated control over object creation, declaring constructors as private comes in handy.

But there must be a way to reach the private constructor from outside the class, and intuitively, that would be the best place to write the control logic. Yes, this can be done via a static method.  

Let us look into the following code
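The original article showed the code as an image; the following is a sketch reconstructing the class from the description below, using the names (PrivConstr, MAX_ALLOWABLE_INSTANCE, OBJ_CNT, createObject) given in the text:

```java
// Sketch reconstructed from the article's description (original shown as an image)
public class PrivConstr {
    private static final int MAX_ALLOWABLE_INSTANCE = 2;
    private static int OBJ_CNT = 0;

    // Private constructor: cannot be called from outside this class
    private PrivConstr() {
        OBJ_CNT++;
    }

    // Static method holding the control logic for object creation
    public static PrivConstr createObject() {
        if (OBJ_CNT < MAX_ALLOWABLE_INSTANCE) {
            return new PrivConstr();
        }
        return null; // limit reached, no object created
    }
}
```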



          The code illustrated above represents a class where the maximum number of allowable objects is 2. So we need to ensure that no other application can create three or more objects out of the class PrivConstr. To have this control, first the constructor is made private; this withholds the freedom of creating objects of PrivConstr from outside the class. Then a static method createObject is written to access this private constructor. The control logic is executed in the createObject method itself.

         A static final field MAX_ALLOWABLE_INSTANCE stores the maximum count (in this code, 2), and a static field OBJ_CNT keeps track of how many objects have already been created. OBJ_CNT is incremented whenever the PrivConstr class is instantiated. In the createObject method, before instantiating any object by calling the private constructor, it is checked whether OBJ_CNT has reached MAX_ALLOWABLE_INSTANCE. If OBJ_CNT has already reached the MAX_ALLOWABLE_INSTANCE value, the private constructor is not called, and hence no object is created. Thus the code for limiting object creation to a predefined number resides within the class itself.

         The code above shows that to create any object of the PrivConstr class, the constructor cannot be called directly; rather, the static method createObject must be called, which in turn, based on its control logic, may call the private constructor to create an object.

Let us test the class with a DemoPrivConstr class as follows.
 
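A possible DemoPrivConstr along the lines the text describes; PrivConstr is repeated here so the example compiles on its own, and the printed messages are illustrative rather than the article's original ones.

```java
// PrivConstr repeated here so this demo is self-contained.
class PrivConstr {
    static final int MAX_ALLOWABLE_INSTANCE = 2;
    static int OBJ_CNT = 0;

    private PrivConstr() {
        OBJ_CNT++;
    }

    static PrivConstr createObject() {
        return (OBJ_CNT < MAX_ALLOWABLE_INSTANCE) ? new PrivConstr() : null;
    }
}

public class DemoPrivConstr {
    public static void main(String[] args) {
        // Try to create three objects; only the first two should succeed.
        PrivConstr obj1 = PrivConstr.createObject();
        PrivConstr obj2 = PrivConstr.createObject();
        PrivConstr obj3 = PrivConstr.createObject();

        if (obj1 != null) {
            System.out.println("obj1 created successfully");
        }
        if (obj2 != null) {
            System.out.println("obj2 created successfully");
        }
        if (obj3 == null) {
            System.out.println("obj3 could not be created");
        }
    }
}
```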
         Here an attempt is made to instantiate the PrivConstr class three times, and there are three if blocks to test whether each object creation succeeded. When obj1 is created, the static field OBJ_CNT is incremented from 0 to 1 and the private constructor is called, resulting in the successful creation of obj1. When obj2 is created, OBJ_CNT has not yet reached the MAX_ALLOWABLE_INSTANCE value, so the private constructor is called once again and obj2 is created successfully; this time OBJ_CNT is incremented to 2, reaching the MAX_ALLOWABLE_INSTANCE value. Hence, when an attempt is made to create obj3, OBJ_CNT has already reached MAX_ALLOWABLE_INSTANCE, so the private constructor is not called, no object is created, and as per the code, obj3 is null.

The output of the code looks like this



The output is as expected. It shows that obj1 and obj2 were successfully created, but obj3 was not, because the class PrivConstr was set not to allow three or more objects to be created.
So if you need control to allow only a fixed number of objects to be created, the private constructor is a handy tool, used along with a static method for its access.
Now in Java there is a pretty common notion of Singleton classes. Singleton classes are those that allow only one object to be created. Hence a Singleton class can be designed in exactly this way; just the value of MAX_ALLOWABLE_INSTANCE has to be changed to 1.
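As a sketch, the same pattern with the limit fixed at one gives the classic Singleton shape. This is a simplified, non-thread-safe version; the class and method names here are illustrative.

```java
// Minimal Singleton sketch: private constructor plus a static accessor.
// Not thread-safe; a production version would need synchronization.
public class Singleton {

    // The single allowed instance, created lazily on first request.
    private static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance; // always the same object
    }
}
```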


Array Length in Java: An interesting insight

 
       Java programmers use .length with arrays all the time, mostly remaining unaware of what .length actually is. Is it a method, a field, or something else? This article explores .length in detail, with all its peculiarities.
In Java, an array is not an instance of any class written in Java source. There are classes such as java.lang.reflect.Array and java.util.Arrays, but the Java designers decided not to provide a dedicated, readily understandable class for arrays themselves.
So if we write
int arr[] = new int[5];

Then arr, as coded above, is not an instance of any class you can find in the JDK sources. As per the Oracle documentation, "An array is a container object that holds a fixed number of values of a single type." Note that the phrase "container object" does not mean an instance of an ordinary Java class. And .length is obviously not a method either (it does not have any parentheses!).
The code below shows that arrays in Java do not correspond to any ordinary Java class.

public class ArrayLengthExplorer {

    public static void main(String[] args) {
        int arr[] = new int[5];
        Class c = arr.getClass();
        System.out.println("Class Name:" + c.getName());
    }
}

The output of the code yields the following
Class Name:[I


Now what is that "[I"? It is the run-time type signature of an array with component type int.
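A short sketch to see a few of these run-time signatures side by side (the class name ArrayTypeSignatures is just illustrative):

```java
// Print the run-time class names of a few array types.
public class ArrayTypeSignatures {
    public static void main(String[] args) {
        System.out.println(new int[0].getClass().getName());    // prints [I
        System.out.println(new float[0].getClass().getName());  // prints [F
        System.out.println(new int[0][0].getClass().getName()); // prints [[I
        System.out.println(new String[0].getClass().getName()); // prints [Ljava.lang.String;
    }
}
```

Each leading "[" marks one array dimension, followed by a code for the component type ("I" for int, "F" for float, "L...;" for a reference type).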

Arrays are indeed something "special" in Java: the JVM has dedicated bytecode instructions for them, including one called arraylength. An expression like arr.length does not compile to an ordinary field access; it compiles to this arraylength instruction. So .length is the source-level manifestation of the arraylength bytecode instruction.

Note that this has nothing to do with any field of the java.lang.Class object for the array; there exists no field called length or arraylength at all. You can check this through the following code snippet.


float arr[] = new float[5];
Class c = arr.getClass();
try {
    System.out.println(c.getField("length"));
} catch (NoSuchFieldException e) {
    e.printStackTrace();
} catch (SecurityException e) {
    e.printStackTrace();
}

The above code gives the following output:

java.lang.NoSuchFieldException: length
at java.lang.Class.getField(Unknown Source)
at com.stncodes.ArrayLengthExplorer.main(ArrayLengthExplorer.java:9)

So arrays are treated specially in Java: length is not a reflective field of any class, but is read by a dedicated, predefined bytecode instruction.

The confusion may creep in from the Oracle documentation, which says: "The public final field length, which contains the number of components of the array." At first this seems to say that length is a real field of a real class, but it should be understood differently. The length of an array behaves as if it were public because it can be accessed from everywhere, and as if it were final because it is immutable: in Java the contents of an array can be altered, but its length is fixed. So the official Oracle documentation has to be understood through its spirit, not its literal meaning.

Hence, as a final conclusion: .length of an array is not an ordinary class field. Reading it is compiled directly into the dedicated arraylength bytecode instruction, without any class signature involved, and the value it yields is immutable.
