Seed project for creating AngularJS Modules

This article shows you how to start creating a new AngularJS module with re-usable components, directives & services using a starter/seed project. The seed project can be found here.

Project Structure

The seed project is very simple and the structure is something like this:-

  • seeder
    • file.js
    • metadata-updater.js
    • repo-creator.js
  • src
    • your-project-name.js
  • .eslintrc.js
  • .gitignore
  • bower.json
  • build.conf.js
  • gulpfile.js
  • init.js
  • package.json
  • update-metadata.js

Let us take a look at the various files in the project.


package.json

This is your main project file for a NodeJS project. You can think of it as analogous to a .csproj or .vbproj file created with Visual Studio. The main properties here are:-

  • name – This is where you specify the name of the project. I prefer the kebab-case naming convention here.
  • private – If you don’t intend to publish your package on npm, set this to true.
  • version – This is the version of your package. A lot of people like to start from 0.x.y, but I prefer to start from 1.0.0.
  • description – This is the description of your project. Keep it simple but meaningful so that most folks can understand it.
  • repository – This is the URL to your repository. Remember, this is the URL to the actual Git (or even SVN) repository. It should not point to an HTML page of your project.
  • license – The type of License for your project. Here’s the full list of all license types.
  • devDependencies – These node packages are needed during the development & build stages only. Here’s the list of node packages we are going to use for the seed project.
    • bower – For using frontend JavaScript packages
    • gulp – This is our task runner, which automates the various repetitive tasks that need to be done during every build.
    • gulp-<plugins> – See the Gulp section below for the list of Gulp plugins that perform the actual build tasks.
    • case – This is used by the metadata-updater.js script to create the angular project name in proper casing.
  • scripts – You can run various scripts during the lifecycle of your package. In our case, we use the postinstall stage to install bower packages and run various tasks using Gulp.

For a full list of all properties available in package.json refer to this documentation.
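Putting the fields above together, a minimal package.json for such a project might look like this (names, URLs and versions are illustrative, not taken from the actual seed project):

```json
{
  "name": "mx-angular-notify",
  "private": true,
  "version": "1.0.0",
  "description": "AngularJS module for showing toast notifications",
  "repository": "https://github.com/your-user/mx-angular-notify.git",
  "license": "MIT",
  "devDependencies": {
    "bower": "^1.8.2",
    "gulp": "^3.9.1",
    "case": "^1.5.4"
  },
  "scripts": {
    "postinstall": "bower install && gulp"
  }
}
```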


bower.json

This is the manifest file for Bower, where all the client-side JavaScript libraries you want to use in your project must be defined. Here too, you must declare the name & description of your project.

  • name – This is where you specify the name of the project. I like to keep it the same as the one defined in package.json, using the kebab-case naming convention.
  • description – This is the description of your project. Again, same as what is in package.json.
  • homepage – This should point to your project page in GitHub. Unlike the one in package.json, this is not a reference to a Git repository.
  • main – This property is used to list the files primarily used in the project. This is what others will use in their projects when using your project as a dependency. As such, it points to our combined JS file in the dist folder (not the minified version though which is meant for production).
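A minimal bower.json following these fields could look like this (names and URL illustrative):

```json
{
  "name": "mx-angular-notify",
  "description": "AngularJS module for showing toast notifications",
  "homepage": "https://github.com/your-user/mx-angular-notify",
  "main": "dist/mx-angular-notify.js"
}
```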


Gulp

Gulp helps automate various tasks needed during the build process. It is extensible via plugins, using which you can do all kinds of things through a very efficient pipelining model.


build.conf.js

This file contains all of the user settings for the gulp build process.

  • srcJs – This refers to all the source JavaScript files that will be combined & compressed. Wildcards are supported here. Because this is a seed project for AngularJS, I have included patterns for the services, components & directives directories. Along with these, the main module file should reside in the root src folder.
  • buildFolder – This is the folder where the output JavaScript file will be written.
  • buildJsFilename – This is the name of the output JavaScript file. All the source files found using the patterns in srcJs will be combined into this output file. A minified version of this file will also be generated and saved with the extension .min.js.
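Based on the settings described above, the file might be sketched as follows (the exact glob patterns and folder names are assumptions):

```js
// illustrative build.conf.js matching the settings described above
module.exports = {
    srcJs: [
        "src/*.js",
        "src/services/*.js",
        "src/components/*.js",
        "src/directives/*.js"
    ],
    buildFolder: "dist/",
    buildJsFilename: "mx-angular-notify.js"
};
```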


gulpfile.js

This file contains Gulp tasks that can be run as part of the build process to perform various tasks such as combining & minifying the source JavaScript files.

It has three tasks:-

  • clean – Clears the output folder.
  • scripts – Combines & minifies the source JavaScript files for use in production.
  • lint – Runs ESLint on our source JavaScript files to make sure that we are adhering to coding standards. Read the section on ESLint for more information.
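The three tasks above might be sketched as follows, assuming gulp 3.x; the specific plugins (gulp-concat, gulp-uglify, gulp-rename, gulp-eslint, del) are assumptions, not confirmed from the seed project:

```js
// sketch of the clean, lint and scripts tasks for gulp 3.x
var gulp = require("gulp");
var concat = require("gulp-concat");
var uglify = require("gulp-uglify");
var rename = require("gulp-rename");
var eslint = require("gulp-eslint");
var del = require("del");
var config = require("./build.conf");

// clears the output folder
gulp.task("clean", function () {
    return del([config.buildFolder]);
});

// runs ESLint over the source files and fails the build on errors
gulp.task("lint", function () {
    return gulp.src(config.srcJs)
        .pipe(eslint())
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

// combines the source files, then writes both plain and minified outputs
gulp.task("scripts", ["clean", "lint"], function () {
    return gulp.src(config.srcJs)
        .pipe(concat(config.buildJsFilename))
        .pipe(gulp.dest(config.buildFolder))
        .pipe(uglify())
        .pipe(rename({ suffix: ".min" }))
        .pipe(gulp.dest(config.buildFolder));
});
```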


.eslintrc.js

The ESLint style we are going to use extends from eslint:recommended and, because this is an AngularJS seed project, plugin:angular/johnpapa.
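That extends chain translates into a very small config file; a sketch:

```js
// .eslintrc.js – extends the recommended rules plus the John Papa AngularJS style guide
module.exports = {
    extends: [
        "eslint:recommended",
        "plugin:angular/johnpapa"
    ]
};
```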


.gitignore

This file contains a list of all folders & files that are to be ignored by git for source control.

It has three entries:-

  • node_modules – For ignoring the NodeJS modules added as dependencies for this project.
  • bower_components – For ignoring the Bower modules this project is dependent upon.
  • dist – This is the output folder that contains the combined & minified JavaScript file. Some people do commit it and publish it to Bower. I am still not sure what the best approach is here. Committing the build outputs just does not seem right to me. I’ll think of the implications when I put everything on TeamCity. But, for now, I keep it ignored.


src/your-project-name.js

This is the JavaScript file in which your angular module is declared. It declares the name of your module, “your-project”, along with any dependencies inside the array [].

angular.module("your-project", []);

README.md

The readme file for the seed project, with instructions on how to use it to create a new project.

How do I Start?


Create a new AngularJS Module


  1. NodeJS – This is a JavaScript project after all, so install Node.JS.
  2. Global Node Packages
    • Bower – Because we are going to be consuming open-source JavaScript packages

      npm install -g bower

    • Gulp – This is our task runner which will be doing things like bundling, minification, linting, etc.

      npm install -g gulp

    • ESLint – To maintain code quality we will use ESLint as our JavaScript linter.

      npm install -g eslint

    • mams – The seeder binary I created for AngularJS projects.

      npm install -g mams

    Make sure you have all of these packages installed globally by running:-

    npm list -g --depth=0

    The output should include the following packages:-

    +-- bower@1.8.2
    +-- eslint@4.18.1
    +-- gulp@3.9.1
    +-- mams@1.0.5
  3. Project Name: mx-angular-notify

  4. Project Description: AngularJS module for showing toast notifications
  5. Output JavaScript Filename: angular-notify.js
  6. GitHub Access Token: 72a8a3e2b8374bcb8acaf0d0f7f4a708 (This is just an example. You must generate your own.)


  1. Go to the directory that holds your projects. mams will create a directory for your project.

    cd C:\projects

  2. Generate an Access Token so that the init script can create a GitHub repository for you. See the section below for the steps on how to generate an access token.

  3. Initialize your project by giving it a suitable name and description, specifying the name of your compressed/minified JavaScript file, and finally providing the access token you generated in the previous step.

    mams -p mx-angular-notify -d "AngularJS module for showing toast notifications" -g -t 72a8a3e2b8374bcb8acaf0d0f7f4a708


You should get an output like:-

Maxotek Angular Module Seeder v 1.0.5
Creating GitHub Repository
Listing repositories
Found: 31 repositories
Created project directory: mx-angular-notify
Seeder repository:
Seeder repository cloned at: mx-angular-notify
Repository created at:
Project: mx-angular-notify
Description: AngularJS module for showing toast notifications
Output File: mx-angular-notify.js
Project URL:
mx-angular-notify/package.json updated
mx-angular-notify/bower.json updated
mx-angular-notify/build.conf.js updated
Renamed project file to: mx-angular-notify/src/mx-angular-notify.js
mx-angular-notify/src/mx-angular-notify.js updated
Updated project name in: mx-angular-notify/src/mx-angular-notify.js
mx-angular-notify/.git/config updated

So, now you have an AngularJS module project locally along with a remote GitHub repository that you will be pushing to. Open up that favorite IDE of yours and start building your AngularJS module.

That is it, your new project is ready for development. If you want to start creating your new AngularJS module, check out my article on how to Create an AngularJS Service.

Alternatively, you can continue reading the article and understand the nitty gritty details of what’s happening under the hood.

Manual Method

Clone this repository and then change the following entries.


package.json

  1. name – This is the name of your project
  2. description – Give your project a meaningful description
  3. repository – The URL to your GitHub repository


bower.json

  1. name – This is the name of your project
  2. description – Give your project a meaningful description
  3. homepage – The URL to your GitHub project


build.conf.js

  1. buildJsFilename – The name of your compressed & minified output JavaScript file. This is what others will include using <script> tags.


.git/config

  1. [remote “origin”] -> url – The SSH URL to your GitHub project’s repository


src/your-project-name.js

  1. Change the module name for your project from the default your-project and add any AngularJS dependencies you need inside the empty array [].

angular.module("your-project", []);

That is how we use the seed project for creating new AngularJS modules.

Amazon Echo Plus Trial Day #1

I just received my Amazon Echo Plus after pre-ordering it early on. I have been looking for an Echo device for about 6 months now. Initially, I thought about buying it from the US site. But I read an article that talked about it being released in India in late 2017. My patience was rewarded, as the pricing was great with a 30% discount for the invitation-based preorders. I think they did not reach the target numbers and the 30% discount is still on. Grab your Echo device while it lasts!

The packaging was slick, as you would expect from an Amazon original product.

Amazon Echo Package Outer

Amazon Echo Package Inside

The Echo devices don’t have a battery and need to be always plugged in. It makes sense as they are not mobile devices and pretty much always remain in the same spot.

Echo Power Plug


Setting up the device is quite easy. All you have to do is install the mobile app and follow the instructions. Basically, you connect to the WiFi network of Echo when it is in setup mode. From there you select your WiFi network and enter the password. After that, it takes a few minutes to configure (there’s probably a bit of download in there).


You have two buttons on the top: an Action button that does some context-sensitive stuff like turning off an alarm or timer, and also serves as the button to wake your device. You can go into setup mode by pressing and holding this button until the ring goes orange.

The other button turns the microphone off and the light ring red.

Voice Training

Back in the day, when speech recognition was just reaching us PC users (think Dragon NaturallySpeaking), one had to train the software to recognise our voice by reading a 10-minute paragraph. Those days are long gone with the advent of Machine Learning & Artificial Intelligence. Having a wake word and set commands helps with the voice recognition, although the training option is still there. But that’s a topic for another day. For now, I am quite happy with the results of the default speech recognition profile. I guess the local release has had some training with Indian accents. Also, Alexa’s voice resembles that of an Indian speaker.

Commands & Skills

Date & Time commands work fine. You can ask for them separately. I am not sure if there’s a single command to make Alexa say both at the same time.

Having to shout out the wake word every time is kind of a chore. I wouldn’t mind that if it allowed me to change the wake word to something custom. Yeah you guessed it right, a name I coined for a certain someone.

The weather report is quite comprehensive. Okay, I confess to falling asleep after the halfway point and waking up to find it raining on a clear Monday night forecast. But that has more to do with the guesswork (prediction, cough cough) at the base stations in the country.

Alexa remembers her physics lectures and she stays put in the spot you placed her (well, until someone trips her over while cleaning the floor). I would like her to recite Pascal’s (or even Rascal’s) law at my speed.

Her reciting my Kindle book was way better than Microsoft Narrator. But, it is still no substitute for an Audible book. Speaking of Audible, it is not available in the Indian version of the Alexa App.

Amazon Prime Music is also not up to the mark. I was unable to create playlists out of my iTunes purchases. Initially, it wasn’t even able to import from iTunes. But once you allow iTunes to share its library XML (via Edit > Preferences > Advanced > Share iTunes Library XML with other applications), the iTunes import option appears. But remember, it will only be able to import the songs you have purchased and not the ones you have downloaded via your subscription.

I did get a one-year extension on my Amazon Prime subscription, and there’s some news that Amazon Prime Music will become part of the Prime subscription. I am not sure if this is a future thing or if it is active even right now, because I could play country & pop music by asking Alexa to play music from those genres. But if I just ask her to play some music, she keeps reverting to Bollywood & Indian music, which isn’t bad; it’s just that I don’t listen to any Bollywood stuff these days.

I was able to increase and decrease the volume of the speakers via the commands “Turn up/down volume”. She lights up with a white ring showing the current volume level.

That’s all for day 1! Stay tuned for more Echo news in the coming days.

C# Encode URL

There are several ways you can encode URLs in C#. It all depends on what framework you are using.

.NET Core

If you are using .NET Core (either ASP .NET, a Class library or a Console App) or even .NET Standard, you can use one of these two methods:-
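The original snippets aren’t shown here; a sketch of the two usual calls (input string illustrative):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // System.Net.WebUtility – form-style encoding, spaces become '+'
        Console.WriteLine(WebUtility.UrlEncode("a b&c"));   // a+b%26c
        // Uri.EscapeDataString – RFC 3986 style, spaces become %20
        Console.WriteLine(Uri.EscapeDataString("a b&c"));   // a%20b%26c
    }
}
```

Note the difference in how the two methods treat spaces; pick the one that matches the component of the URL you are building.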




ASP .NET Framework

If you are inside an instance method of System.Web.Mvc.Controller, you can use the Server property as follows:-
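The snippet itself is missing here; a sketch of the usual call (controller and input illustrative):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Server is the HttpServerUtilityBase exposed by the Controller base class
        string encoded = Server.UrlEncode("a b&c"); // a+b%26c
        return Content(encoded);
    }
}
```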

For any other class, you can use one of these:-
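A sketch of the static alternatives (input illustrative):

```csharp
using System.Web; // requires a reference to the System.Web assembly

string encoded1 = HttpUtility.UrlEncode("a b&c");     // a+b%26c
// UrlPathEncode only encodes the path portion: spaces become %20, '&' is left alone
string encoded2 = HttpUtility.UrlPathEncode("a b&c"); // a%20b&c
```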



.NET Framework Console/Desktop Application

For the remaining portions of the .NET platform like a .NET Framework Console Application, Class Library, Windows Service & Desktop Application you can use the following:-
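A sketch of the usual approach here (input illustrative):

```csharp
using System;
using System.Web; // add a reference to the System.Web assembly in your project

class Program
{
    static void Main()
    {
        Console.WriteLine(HttpUtility.UrlEncode("a b&c")); // a+b%26c
    }
}
```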

On a side note, you can try my online URL Encoder.

Click here to read my blog post on how to decode URL in C#.

64-Bit Computing

Around 2003, the term 64-Bit computing was everywhere. Want to install Windows XP? Great! Do you want the 32-Bit disc or the 64-Bit disc? Installing WinZip on your shiny new re-installed OS? Which version of the installer do you want to download, 32-Bit or 64-Bit? I want the higher “version”, give me all those 64 Bits! Then, when you tried to install the bigger & badder version of WinZip, your poor 32-Bit Operating System was not capable of running the 64-Bit installer. These were the questions and problems everybody was running into back in the day. Today, the term 64-Bit is ubiquitous, as even smartphones are 64-Bit these days.

What does 64-Bit computing mean?

Everything in digital computers revolves around the bits 0 and 1. What that means is, you can represent everything, from numbers to the text in articles such as this one, images of your childhood memories, songs of your favorite artist & hours of all those cat videos you adore, using sequences of 0s and 1s.

Here are some examples of binary representations:-

Value           Binary Representation
4815162342      100011111000000011000101111100110
Et tu, Brute?   01000101011101000010000001110100011101010010110000100000010000100111001001110101011101000110010100111111
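The text row of the table can be reproduced in a couple of lines; a quick Python sketch:

```python
text = "Et tu, Brute?"
# each character becomes its 8-bit ASCII value
bits = "".join(format(ord(ch), "08b") for ch in text)
print(bits)  # the 104-bit string shown in the table
```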

Each of these 0s and 1s is called a bit. 8 consecutive bits form a byte. 1024 (2 to the power of 10) bytes make a Kilobyte. 1024 Kilobytes make a Megabyte. 1024 Megabytes make a.. you know the math.

For these bits to be useful, they need to be stored in the memory. Memory can be either primary or secondary. Secondary memory is your permanent storage like the Hard Disk Drive or the fancy new Solid State Drive. If you have been living under a rock, it could also be a CD/DVD drive or the floppy disks from the stone age. Primary memory on the other hand is volatile and gets wiped out on system shutdown/reboot.

These days it is quite common to have primary memory (RAM) in the range of 8 to 16 GB on PCs and laptops. Heck, even some crazy Android smartphones ship with 8 Gigs of RAM. Why a phone needs that much memory is a different topic altogether.

Each memory location in the RAM has a unique address: a unique number assigned to it. The CPU uses these memory locations to store and retrieve the bits of data. The smallest amount of data that can be stored is a byte. Although, since a byte is made up of bits, we can still refer to it all as bits; you just have to store 8 of them together at the very minimum.

The maximum amount of RAM that the CPU can read/write is governed by whether the CPU is 32-Bit, 64-Bit or the “soon” to be 128-Bit. To address each memory location for reading and writing, the CPU needs to keep track of these addresses, and the width of these addresses is what is implied by your CPU being 32-Bit or 64-Bit. A 32-Bit CPU can address a maximum of 2^32 bytes of memory, i.e. 4 GB. That is the theoretical limit, though. On Windows, the practical limit was around 3.2 GB. But there are some 32-Bit Linux kernels that support more than 4 GB of RAM using something called Physical Address Extension (PAE).
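The 4 GB figure falls straight out of the address width; a quick check in Python:

```python
# a 32-bit address can refer to 2**32 distinct byte locations
GB = 1024 ** 3  # bytes per gigabyte

print((2 ** 32) // GB)  # 4 -> the 4 GB limit of a 32-bit address space
print((2 ** 64) // GB)  # 17179869184 -> GBs addressable with 64-bit addresses
```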


So, now that you know what 64-Bit computing is, what are (or were) the implications of this? Well, first of all, if your system had less than 3.2 GB of RAM, there was no point in having a 64-Bit processor. And without a 64-Bit CPU, you could not even install a 64-Bit Operating System.

On Windows, you would have to use a 64-Bit OS if you wanted to make use of RAM beyond 3.2 GB. On Linux, as mentioned before, there were ways to circumvent that. But generally speaking, you would go for a 64-Bit kernel for RAM greater than 4 Gigs.

Next comes software compatibility. Programs can be compiled for either 32-Bit or 64-Bit. A 32-Bit machine cannot run a 64-Bit program under any circumstances, while a 64-Bit machine can run 32-Bit programs. On Windows, the WoW64 subsystem allowed 32-Bit programs to run on a 64-Bit OS, albeit at a small performance cost.

The rule of thumb was: if you could find a 64-Bit version of the software, use that on your 64-Bit machine. If not, keep using the 32-Bit version. In the early days, some folks could not even install the 64-Bit OS on their brand new 64-Bit CPU because some application they could not live without did not have a 64-Bit version. Some applications could not even run on a 64-Bit machine using WoW64 (think anti-virus or other low-level software & games).

One thing to note, though, is that even a 32-Bit OS can support disks (secondary memory) far greater than 4 GB.

Putting it all together

So, summing it all up: 64-Bit computing implies that you can utilize more than 4 GB of primary memory on your computer. It also means that you can have individual programs that require more than 4 Gigs of RAM.

C# Convert Int to Byte Array

An Integer in C# is stored using 4 bytes with the values ranging from -2,147,483,648 to 2,147,483,647. Use the BitConverter.GetBytes() method to convert an integer to a byte array of size 4.

One thing to keep in mind is the endianness of the output. BitConverter.GetBytes returns the bytes in the same endian format as the system. This is most likely little-endian in your case. If you need the output in big-endian format (which is the standard as per RFC 1014 3.2), the output byte array needs to be reversed using Array.Reverse().

Here’s the portable version of the entire code that checks the endianness of the system and always returns the byte array in Big-Endian format.

int number = 1024; // example value
byte[] bytes = BitConverter.GetBytes(number);
// on a little-endian system, reverse the array to get big-endian output
if (BitConverter.IsLittleEndian)
{
    Array.Reverse(bytes);
}

C# Decode URL

There are several ways you can decode URLs in C#. It all depends on what framework you are using.

.NET Core

If you are using .NET Core (either ASP .NET, a Class library or a Console App) or even .NET Standard, you can use one of these two methods:-
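The original snippets aren’t shown here; a sketch of the two usual calls (input string illustrative):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // System.Net.WebUtility – also decodes '+' back to a space
        Console.WriteLine(WebUtility.UrlDecode("a+b%26c"));     // a b&c
        // Uri.UnescapeDataString – RFC 3986 style, leaves '+' alone
        Console.WriteLine(Uri.UnescapeDataString("a%20b%26c")); // a b&c
    }
}
```

Note that Uri.UnescapeDataString does not convert '+' to a space, so choose the method that matches how the value was encoded.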




ASP .NET Framework

If you are inside an instance method of System.Web.Mvc.Controller, you can use the Server property as follows:-
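The snippet itself is missing here; a sketch of the usual call (controller and input illustrative):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Server is the HttpServerUtilityBase exposed by the Controller base class
        string decoded = Server.UrlDecode("a+b%26c"); // a b&c
        return Content(decoded);
    }
}
```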

For any other class, you can use one of these:-
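A sketch of the static alternative (input illustrative):

```csharp
using System.Web; // requires a reference to the System.Web assembly

string decoded = HttpUtility.UrlDecode("a+b%26c"); // a b&c
```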



.NET Framework Console/Desktop Application

For the remaining portions of the .NET platform like a .NET Framework Console Application, Class Library, Windows Service & Desktop Application you can use the following:-
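A sketch of the usual approach here (input illustrative):

```csharp
using System;
using System.Web; // add a reference to the System.Web assembly in your project

class Program
{
    static void Main()
    {
        Console.WriteLine(HttpUtility.UrlDecode("a+b%26c")); // a b&c
    }
}
```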

On a side note, you can try my online URL Decoder.

Click here to read my blog post on how to encode URL in C#.

Code Signing Certificates – Why/When to Use

What are code signing certificates?

Code Signing Certificates are used to digitally sign binaries (Executables and DLLs).

Why to sign binaries?

Signing the binaries ensures that the files are from a trusted source (you/your company) and that they have not been tampered with by someone else.

What to sign?

You should use them if you are distributing binaries that you built to a customer. These include the executable (EXE) for your application and any libraries (DLLs) you built to modularize the application. Chances are that you are also creating a Windows installer to package your application; you should sign that too.

What do I need for signing?

  • Your binaries
  • A code signing certificate
  • A code signing tool

Timestamps Quirks

Just like an SSL certificate, your code signing certificate has a validity period. If you forget to renew your SSL certificate, browsers will not allow users into your site. Well, unless they are really desperate and bypass the protection, in which case I would like to know what content you have up there! Thankfully, the certificate providers keep spamming you about the impending expiry, so you get a new certificate, put it on your server and everything is hunky-dory.

The same approach, however, will not work for code signing. You do not ship the certificate/private key with your application. You use it to sign the binaries and embed that information in the binaries themselves (it actually modifies your binary). Your certificate might expire 2 years from now, but the binaries must keep working beyond that. To achieve this, we utilize a Timestamp server from a trusted authority during the code signing phase. The signing tool hits the timestamp server and embeds the timestamp information in your binary. Congratulations, your application is now omnitemporal! Operating systems will never warn the user that the code signing certificate has expired, even long after the certificate’s actual expiry date has passed.

Creating and Applying SSL Certificates (Complete Procedure)

I am writing this post in hope that it will help others by saving hours of research in trying to generate and use SSL certificates.

These steps outline the complete activity needed for generating SSL certificates for a web server in Java. For clarity, I have also included the steps that are done by the Certificate Authority. As such, you can follow these steps exactly and experience the role each of them plays.

Activity 1: Creating the Root CA

Step 1: Create a Private Key for the Root CA

openssl genrsa -out ca.key 4096

Step 2: Create the Self-Signed Certificate of the Root CA

openssl req -sha256 -new -x509 -days 1826 -key ca.key -out ca.crt

Root CA Certificates are always self signed. These certificates are valid for a long period, like 20-25 years.

Notice the -sha256 option: it forces the certificate to use the now-mandatory SHA-2 instead of the insecure SHA-1 algorithm, which is blocked by browsers.

Activity 2: Creating the Intermediate Certificate

Step 1: Create a Private Key for the Intermediate CA

openssl genrsa -out ia.key 4096

Step 2: Generate a Certificate Signing Request (CSR) for the Intermediate CA Certificate

openssl req -new -key ia.key -out ia.csr

Unlike the Root CA Certificate, this one cannot be self signed. It will be signed by the Root CA. The CSR created during this step is used by the Root CA to generate this certificate.

Step 3: Create an extension file for the Intermediate CA Certificate

echo "basicConstraints=CA:TRUE" > ia.ext

This step is needed to give the Intermediate CA the authority to generate certificates for others. If this step is omitted along with the -extfile option in the next step, the browser will display a warning saying that the Intermediate CA is not authorized to generate certificates.

This rule does not apply to the Root CA certificates. It is needed to differentiate between End Entity and Intermediate CA certificates. Without this security in place, all end entity certificates (like the one for your server) would be able to generate certificates for others while inheriting the trust from the Root CA.

Step 4: Create the Intermediate CA Certificate signed by the Root CA

openssl x509 -req -sha256 -days 730 -in ia.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out ia.crt -extfile ia.ext

Notice the -extfile option which passes the extensions file from the previous step.

Activity 3: Creating the Server Certificate for your Web Server

Step 1: Create a Private Key for the Server

openssl genrsa -out 4096

Step 2: Generate a Certificate Signing Request (CSR) for the Server Certificate

openssl req -new -key -out

In this case, the Intermediate CA will generate the certificate for our server. So, a CSR needs to be generated and sent to the Intermediate CA.

Step 3: Create the Server Certificate signed by the Intermediate CA

openssl x509 -req -sha256 -days 730 -in -CA ia.crt -CAkey ia.key -set_serial 02 -out

We don’t need the -extfile option or the extensions file in this case, because this server certificate is an End Entity certificate and should not have the authority to generate certificates.
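The whole chain built in the activities above can be sanity-checked end to end. The following self-contained sketch repeats the steps with throwaway keys (file names, subjects and key sizes illustrative) and then asks openssl to verify the server certificate against the root, treating the intermediate as untrusted glue:

```shell
set -e
cd "$(mktemp -d)"

# Root CA: private key + self-signed certificate
openssl genrsa -out ca.key 2048
openssl req -sha256 -new -x509 -days 30 -key ca.key -subj "/CN=Test Root CA" -out ca.crt

# Intermediate CA: key, CSR, CA:TRUE extension, certificate signed by the root
openssl genrsa -out ia.key 2048
openssl req -new -key ia.key -subj "/CN=Test Intermediate CA" -out ia.csr
echo "basicConstraints=CA:TRUE" > ia.ext
openssl x509 -req -sha256 -days 30 -in ia.csr -CA ca.crt -CAkey ca.key \
    -set_serial 01 -out ia.crt -extfile ia.ext

# Server (end entity): key, CSR, certificate signed by the intermediate
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=localhost" -out server.csr
openssl x509 -req -sha256 -days 30 -in server.csr -CA ia.crt -CAkey ia.key \
    -set_serial 02 -out server.crt

# verify the full chain: server signed by intermediate, intermediate by root
openssl verify -CAfile ca.crt -untrusted ia.crt server.crt
```

If everything lines up, the last command prints "server.crt: OK"; a failure here usually means the CA:TRUE extension was skipped or the wrong key signed a certificate.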

Activity 4: Using the certificates with JAVA

If you did not use keytool to create the private key and CSR (as in our case), there is a bit of a trick to using these certificates with JAVA. Keytool does not allow you to import the private key directly. This is for security reasons, because the private key should never leave the server; you don’t need to provide your private key to the CA for it to generate a certificate for you.

Step 1: Chain Root and Intermediate Certificates

cat ia.crt ca.crt > bundledca.crt

This is very important because the client will only have the Root CA’s certificate in its certificate store. All intermediate certificates must be passed by the server to the client.

Step 2: Convert the certificate and private key to the Intermediate PKCS12 format

As keytool lacks (more like intentionally omits) the ability to import the private key directly, we are going to use the PKCS12 intermediate format. This PKCS12 file will have the private key of our server, along with the public keys of our server, the Intermediate CA and the Root CA chained together.

openssl pkcs12 -export -in -inkey -out -name testmaxotek -CAfile bundledca.crt -caname gidia -chain

Step 3: Convert the PKCS12 file to Java Keystore

Finally, we convert the PKCS12 file to JAVA’s Keystore format, which can then be used by a web server like Tomcat or JBoss.

keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore -srckeystore -srcstoretype PKCS12 -srcstorepass changeit -alias testmaxotek

The important thing here is to match the alias with the one used in the previous step.

Useful Commands

Convert Keystore to PKCS12

keytool -importkeystore -srckeystore test-142.keystore -destkeystore test-142.p12 -deststoretype PKCS12

Export the certificate

openssl pkcs12 -in test-142.p12 -nokeys -out test-142.pem

Export the private key

openssl pkcs12 -in test-142.p12  -nodes -nocerts -out test-142.key