Amazon Echo Plus Trial Day #1

I just received my Amazon Echo Plus after pre-ordering it early on. I had been looking for an Echo device for about six months. Initially, I thought about buying it from the US site, but then I read an article about it being released in India in late 2017. My patience was rewarded, as the pricing was great with a 30% discount for the invitation-based pre-orders. I think they did not reach their target numbers, because the 30% discount is still on. Grab your Echo device while it lasts!

The packaging was slick, as you would expect from an Amazon original product.

Amazon Echo Package Outer

Amazon Echo Package Inside

The Echo devices don’t have a battery and need to be plugged in at all times. That makes sense, as they are not mobile devices and pretty much always stay in the same spot.

Echo Power Plug


Setting up the device is quite easy. All you have to do is install the mobile app and follow the instructions. Basically, you connect to the WiFi network of Echo when it is in setup mode. From there you select your WiFi network and enter the password. After that, it takes a few minutes to configure (there’s probably a bit of download in there).


There are two buttons on the top. The Action button does context-sensitive things like turning off an alarm or timer, and it also serves as the button to wake the device. You can enter setup mode by pressing and holding it until the light ring turns orange.

The other button turns the microphone off and the light ring red.

Voice Training

Back in the day, when speech recognition was just reaching us PC users (think Dragon NaturallySpeaking), you had to train the software to recognise your voice by reading out a ten-minute passage. Those days are long gone with the advent of Machine Learning & Artificial Intelligence; having a wake word and a fixed set of commands also helps with recognition. The training option is still there, but that’s a topic for another day. For now, I am quite happy with the results of the default speech recognition profile. I guess the local release has had some training with the Indian accent; Alexa’s voice even resembles that of an Indian speaker.

Commands & Skills

Date & Time commands work fine, though you have to ask for them separately. I am not sure if there’s a single command that makes Alexa say both at the same time.

Having to shout out the wake word every time is kind of a chore. I wouldn’t mind that if it allowed me to change the wake word to something custom. Yeah you guessed it right, a name I coined for a certain someone.

The weather report is quite comprehensive. Okay, I confess to falling asleep after the halfway point and waking up to find it raining on what was forecast to be a clear Monday night. But that has more to do with the guesswork (prediction, cough cough) at the base stations in the country.

Alexa remembers her physics lectures and stays put in the spot you placed her (well, until someone trips her over while cleaning the floor). I would like her to recite Pascal’s (or even Rascal’s) law at my speed.

Her reciting my Kindle book was way better than Microsoft Narrator, but it is still no substitute for an Audible audiobook. Speaking of Audible, it is not available in the Indian version of the Alexa app.

Amazon Prime Music is also not up to the mark. I was unable to create playlists out of my iTunes purchases. Initially, it wasn’t even able to import from iTunes. But once you allow iTunes to share its library XML (Edit > Preferences > Advanced > Share iTunes Library XML with other applications), the iTunes import option appears. Remember, though, that it will only import the songs you have purchased, not the ones you have downloaded via your subscription.

I did get a one-year extension on my Amazon Prime subscription, and there is some news that Amazon Prime Music will become part of the Prime subscription. I am not sure if this is a future thing or already active, because I could play country & pop music by asking Alexa to play music from those genres. But if I simply ask her to play some music, she keeps reverting to Bollywood & Indian music, which isn’t bad; it’s just that I don’t listen to any Bollywood stuff these days.

I was able to increase and decrease the volume of the speakers via the commands “Turn up/down volume”. She lights up with a white ring showing the current volume level.

That’s all for day 1! Stay tuned for more Echo news in the coming days.

C# Encode URL

There are several ways you can encode URLs in C#. It all depends on what framework you are using.

.NET Core

If you are using .NET Core (either ASP .NET, a Class library or a Console App) or even .NET Standard, you can use one of these two methods:-
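The code samples here did not survive extraction, so as a minimal sketch, the two likely candidates are `System.Net.WebUtility.UrlEncode` and `Uri.EscapeDataString` (the sample input string is my own):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        string raw = "a b&c";

        // System.Net.WebUtility — available in .NET Core and .NET Standard;
        // encodes a space as '+'
        Console.WriteLine(WebUtility.UrlEncode(raw));  // a+b%26c

        // Uri.EscapeDataString — percent-encodes a space as %20
        Console.WriteLine(Uri.EscapeDataString(raw));  // a%20b%26c
    }
}
```

Note the difference in how the two handle spaces: pick `Uri.EscapeDataString` when you need strict RFC-style percent encoding, e.g. for query string values.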




ASP .NET Framework

If you are inside an instance method of System.Web.Mvc.Controller, you can use the Server property as follows:-
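A minimal sketch of what that looks like (the controller and action names are mine):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Server is the HttpServerUtilityBase instance exposed by Controller
        string encoded = Server.UrlEncode("a b&c"); // "a+b%26c"
        return Content(encoded);
    }
}
```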

For any other class, you can use one of these:-
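Sketching the two options, assuming the class runs inside an ASP .NET request (the helper name is mine):

```csharp
using System.Web;

static class UrlEncodeSketch
{
    public static string Encode(string value)
    {
        // Option 1: the static HttpUtility helper
        string viaHttpUtility = HttpUtility.UrlEncode(value);

        // Option 2: reach the Server property through the current request context
        string viaServer = HttpContext.Current.Server.UrlEncode(value);

        return viaHttpUtility;
    }
}
```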



.NET Framework Console/Desktop Application

For the remaining portions of the .NET platform like a .NET Framework Console Application, Class Library, Windows Service & Desktop Application you can use the following:-
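As a sketch, `HttpUtility` works here too once you reference the `System.Web` assembly, and `Uri.EscapeDataString` needs no extra reference at all:

```csharp
using System;
using System.Web; // add a project reference to the System.Web assembly

class Program
{
    static void Main()
    {
        Console.WriteLine(HttpUtility.UrlEncode("a b&c"));  // a+b%26c

        // Uri.EscapeDataString lives in mscorlib, no extra reference needed
        Console.WriteLine(Uri.EscapeDataString("a b&c"));   // a%20b%26c
    }
}
```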

On a side note, you can try my online URL Encoder.

Click here to read my blog post on how to decode URL in C#.

64-Bit Computing

Around 2003, the term 64-Bit computing was everywhere. Want to install Windows XP? Great! Do you want the 32-Bit disc or the 64-Bit disc? Installing WinZip for your shiny new re-installed OS? Which installer do you want to download, 32-Bit or 64-Bit? I want the higher “version”, give me all those 64 Bits! Then, when you try to install the bigger & badder version of WinZip, your poor 32-Bit Operating System is not capable of running the 64-Bit installer. These were the questions and problems everybody ran into back in the day. Today, the term 64-Bit is ubiquitous, as even smartphones are 64-Bit.

What does 64-Bit computing mean?

Everything in digital computers revolves around the bits 0 and 1. What that means is, you can represent everything, from numbers and the text in articles such as this one to images of your childhood memories, songs of your favorite artist & hours of all those cat videos you adore, using sequences of 0s and 1s.

Here are some examples of binary representations:-

Value Binary Representation
4815162342 100011111000000011000101111100110
Et tu, Brute? 01000101011101000010000001110100011101010010110000100000010000100111001001110101011101000110010100111111
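As a quick sketch of how such representations can be produced in C# (the program is mine, not part of the original post):

```csharp
using System;
using System.Linq;
using System.Text;

class Program
{
    static void Main()
    {
        // A number: print its bits (matches the first table row above)
        Console.WriteLine(Convert.ToString(4815162342L, 2));

        // Text: print the bits of each ASCII byte, 8 bits per character
        // (matches the second table row above)
        string bits = string.Concat(
            Encoding.ASCII.GetBytes("Et tu, Brute?")
                    .Select(b => Convert.ToString(b, 2).PadLeft(8, '0')));
        Console.WriteLine(bits);
    }
}
```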

Each of these 0s and 1s is called a bit. 8 consecutive bits form a byte. 1024 (2 to the power of 10) bytes make a Kilobyte. 1024 Kilobytes make a Megabyte. 1024 Megabytes make a.. you know the math.

For these bits to be useful, they need to be stored in memory. Memory can be either primary or secondary. Secondary memory is your permanent storage, like the Hard Disk Drive or the fancy new Solid State Drive; if you have been living under a rock, it could also be a CD/DVD or floppy disks from the stone age. Primary memory, on the other hand, is volatile and gets wiped out on system shutdown/reboot.

These days it is quite common to have primary memory (RAMs) in the range of 8 to 16 GB on PCs and Laptops. Heck, even some crazy Android smartphones ship with 8 Gigs of RAM. Why a phone needs that much amount of memory is a different topic all together.

Each memory location in the RAM has a unique numeric address. The CPU uses these memory locations to store and retrieve the bits of data. The smallest amount of data that can be addressed is a byte; since a byte is made up of bits, we can still refer to it all as bits, you just have to store at least 8 of them together.

The maximum amount of RAM that the CPU can read/write is governed by whether the CPU is 32-Bit, 64-Bit or the “soon” to be 128-Bit. To read and write each memory location, the CPU needs to keep track of its address, and the width of these addresses is what the 32-Bit or 64-Bit label refers to. A 32-Bit CPU can address a maximum of 2^32 bytes of memory, i.e. 4 GB. That is the theoretical limit, though; on Windows, the practical limit was around 3.2 GB. But there are some 32-Bit Linux kernels that support more than 4 GB of RAM using something called Physical Address Extension (PAE).


So, now that you know what 64-Bit computing is, what are (or were) the implications? Well, first of all, if your system had less than 3.2 GB of RAM, there was no point in having a 64-Bit processor. And without a 64-Bit CPU, you could not even install a 64-Bit Operating System.

On Windows, you would have to use a 64-Bit OS if you wanted to make use of RAM beyond 3.2 GB. On Linux, as mentioned before, there were ways to circumvent that, but generally speaking you would go for a 64-Bit kernel for RAM greater than 4 Gigs.

Next is software compatibility. Programs can be compiled as either 32-Bit or 64-Bit. A 32-Bit machine cannot run a 64-Bit program under any circumstances, while a 64-Bit machine can run 32-Bit programs. On Windows, the WoW64 subsystem allows 32-Bit programs to run on a 64-Bit OS, albeit at a small performance cost.
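In .NET you can observe this distinction directly; as a small sketch (not from the original post), the `Environment` class reports both the OS and the process bitness:

```csharp
using System;

class Program
{
    static void Main()
    {
        // True when the OS itself is 64-bit, regardless of how this
        // program was compiled
        Console.WriteLine(Environment.Is64BitOperatingSystem);

        // True only when this process runs as 64-bit; a 32-bit build
        // running under WoW64 prints False here while the line above
        // prints True
        Console.WriteLine(Environment.Is64BitProcess);

        // Pointer size in bytes: 4 in a 32-bit process, 8 in a 64-bit one
        Console.WriteLine(IntPtr.Size);
    }
}
```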

The rule of thumb was, if you could find a 64-Bit version of the software, use that on your 64-Bit machine. If not, keep using the 32-Bit version of the software. In the early days, some folks could not even install the 64-Bit OS on their brand new 64 Bit CPU because some application they could not live without did not have a 64 Bit version. Some could not even run on a 64 Bit machine using WoW64 (think anti-virus or other low level software & games).

One thing to note, though, is that even a 32-Bit OS can support disks (secondary memory) far greater than 4 GB.

Putting it all together

So, summing it all up: 64-Bit computing implies that you can utilize more than 4 GB of primary memory on your computer. It also means that individual programs can use more than 4 Gigs of RAM.

C# Convert Int to Byte Array

An Integer in C# is stored using 4 bytes with the values ranging from -2,147,483,648 to 2,147,483,647. Use the BitConverter.GetBytes() method to convert an integer to a byte array of size 4.

One thing to keep in mind is the endianness of the output. BitConverter.GetBytes returns the bytes in the same endianness as the system, which is most likely little-endian in your case. If you need the output in big-endian format (the standard network byte order as per RFC 1014 3.2), the byte array needs to be reversed using Array.Reverse().

Here’s the portable version of the entire code that checks the endianness of the system and always returns the byte array in Big-Endian format.

int number = 1024;
byte[] bytes = BitConverter.GetBytes(number);
if (BitConverter.IsLittleEndian)
    Array.Reverse(bytes);
// bytes now holds the integer in big-endian order

C# Decode URL

There are several ways you can decode URLs in C#. It all depends on what framework you are using.

.NET Core

If you are using .NET Core (either ASP .NET, a Class library or a Console App) or even .NET Standard, you can use one of these two methods:-
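The code samples here did not survive extraction, so as a minimal sketch, the two likely candidates are `System.Net.WebUtility.UrlDecode` and `Uri.UnescapeDataString` (the sample inputs are my own):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // System.Net.WebUtility — available in .NET Core and .NET Standard;
        // decodes '+' back into a space
        Console.WriteLine(WebUtility.UrlDecode("a+b%26c"));      // a b&c

        // Uri.UnescapeDataString — note: it does NOT turn '+' into a space
        Console.WriteLine(Uri.UnescapeDataString("a%20b%26c"));  // a b&c
    }
}
```

The `+` handling is the main gotcha: use `Uri.UnescapeDataString` only for strictly percent-encoded input.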




ASP .NET Framework

If you are inside an instance method of System.Web.Mvc.Controller, you can use the Server property as follows:-
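A minimal sketch of what that looks like (the controller and action names are mine):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Server is the HttpServerUtilityBase instance exposed by Controller
        string decoded = Server.UrlDecode("a+b%26c"); // "a b&c"
        return Content(decoded);
    }
}
```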

For any other class, you can use one of these:-
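Sketching the two options, assuming the class runs inside an ASP .NET request (the helper name is mine):

```csharp
using System.Web;

static class UrlDecodeSketch
{
    public static string Decode(string value)
    {
        // Option 1: the static HttpUtility helper
        string viaHttpUtility = HttpUtility.UrlDecode(value);

        // Option 2: reach the Server property through the current request context
        string viaServer = HttpContext.Current.Server.UrlDecode(value);

        return viaHttpUtility;
    }
}
```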



.NET Framework Console/Desktop Application

For the remaining portions of the .NET platform like a .NET Framework Console Application, Class Library, Windows Service & Desktop Application you can use the following:-
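As a sketch, `HttpUtility` works here too once you reference the `System.Web` assembly, and `Uri.UnescapeDataString` needs no extra reference at all:

```csharp
using System;
using System.Web; // add a project reference to the System.Web assembly

class Program
{
    static void Main()
    {
        Console.WriteLine(HttpUtility.UrlDecode("a+b%26c"));     // a b&c

        // Uri.UnescapeDataString lives in mscorlib, no extra reference needed
        Console.WriteLine(Uri.UnescapeDataString("a%20b%26c"));  // a b&c
    }
}
```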

On a side note, you can try my online URL Decoder.

Click here to read my blog post on how to encode URL in C#.

Code Signing Certificates – Why/When to Use

What are code signing certificates?

Code Signing Certificates are used to digitally sign binaries (Executables and DLLs).

Why to sign binaries?

Signing the binaries ensures that the files are from a trusted source (you/your company) and that they have not been tampered with by someone else.

What to sign?

You should sign any binaries that you built and are distributing to a customer. These include the executable (EXE) for your application and any libraries (DLLs) you built to modularize it. Chances are you are also creating a Windows installer to package your application; you should sign that too.

What do I need for signing?

  • Your binaries
  • A code signing certificate
  • A code signing tool

Timestamps Quirks

Just like an SSL certificate, your code signing certificate has a validity period. If you forget to renew your SSL certificate, browsers will not allow users to get into your site. Well, unless they are really desperate and bypass the protection, in which case I would like to know what content you have up there! Thankfully, the certificate providers keep spamming you about the impending expiry, so you get a new certificate, put it on your server and everything is hunky-dory.

The same approach, however, will not work for code signing. You do not ship the certificate/private key with your application; you use it to sign the binaries and embed that information in the binaries themselves (signing actually modifies your binary). Your certificate might expire two years from now, but the binaries must keep working beyond that. To achieve this, we utilize a timestamp server from a trusted authority during the code signing phase: the signing tool hits the timestamp server and embeds the timestamp in your binary. Congratulations, your application is now Omnitemporal! Operating systems will not warn the user that the code signing certificate has expired, even long after the actual expiry date has passed, because the timestamp proves the binary was signed while the certificate was still valid.

Creating and Applying SSL Certificates (Complete Procedure)

I am writing this post in hope that it will help others by saving hours of research in trying to generate and use SSL certificates.

These steps outline the complete activity needed for generating SSL certificates for a web server in Java. For clarity, I have also included the steps that are done by the Certificate Authority. As such, you can follow these steps exactly and experience the role each of them plays.

Activity 1: Creating the Root CA

Step 1: Create a Private Key for the Root CA

openssl genrsa -out ca.key 4096

Step 2: Create the Self-Signed Certificate of the Root CA

openssl req -sha256 -new -x509 -days 1826 -key ca.key -out ca.crt

Root CA certificates are always self-signed. These certificates are valid for a long period, like 20-25 years.

Notice the -sha256 option: it forces the certificate to use the now-mandatory SHA-2 family instead of the insecure SHA-1 algorithm, which is blocked by browsers.

Activity 2: Creating the Intermediate Certificate

Step 1: Create a Private Key for the Intermediate CA

openssl genrsa -out ia.key 4096

Step 2: Generate a Certificate Signing Request (CSR) for the Intermediate CA Certificate

openssl req -new -key ia.key -out ia.csr

Unlike the Root CA certificate, this one cannot be self-signed; it will be signed by the Root CA. The CSR created in this step is what the Root CA uses to generate the certificate.

Step 3: Create an extension file for the Intermediate CA Certificate

echo "basicConstraints=CA:TRUE" > ia.ext

This step is needed to give the Intermediate CA the authority to generate certificates for others. If this step is omitted along with the -extfile option in the next step, the browser will display a warning saying that the Intermediate CA is not authorized to generate certificates.

This rule does not apply to Root CA certificates. It exists to differentiate between End Entity and Intermediate CA certificates. Without this check in place, any end entity certificate (like your server's) would be able to generate certificates for others while inheriting the trust from the Root CA.

Step 4: Create the Intermediate CA Certificate signed by the Root CA

openssl x509 -req -sha256 -days 730 -in ia.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out ia.crt -extfile ia.ext

Notice the -extfile option which passes the extensions file from the previous step.

Activity 3: Creating the Server Certificate for your Web Server

Step 1: Create a Private Key for the Server

openssl genrsa -out server.key 4096

Step 2: Generate a Certificate Signing Request (CSR) for the Server Certificate

openssl req -new -key server.key -out server.csr

In this case, the Intermediate CA will generate the certificate for our server. So, a CSR needs to be generated and sent to the Intermediate CA.

Step 3: Create the Server Certificate signed by the Intermediate CA

openssl x509 -req -sha256 -days 730 -in server.csr -CA ia.crt -CAkey ia.key -set_serial 02 -out server.crt

We don’t need the -extfile option or the extensions file in this case. This is because this Server certificate is an End Entity certificate and it should not have the authority to generate certificates.

Activity 4: Using the certificates with Java

If you did not use keytool to create the private key and CSR (as in our case), there is a bit of a trick to using these certificates with Java. Keytool does not allow you to import a private key directly. This is for security reasons: the private key should never leave the server, and you don't need to provide it to the CA for the CA to generate a certificate for you.

Step 1: Chain Root and Intermediate Certificates

cat ia.crt ca.crt > bundledca.crt

This is very important because the client will only have the Root CA's certificate in its certificate store. All intermediate certificates must be passed to the client by the server.

Step 2: Convert the certificate and private key to the Intermediate PKCS12 format

As keytool lacks (more like intentionally omits) the ability to import the private key directly, we are going to use the PKCS12 intermediate format. This PKCS12 file will hold the private key of our server along with the certificates of our server, the Intermediate CA and the Root CA chained together.

openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -name testmaxotek -CAfile bundledca.crt -caname gidia -chain

Step 3: Convert the PKCS12 file to Java Keystore

Finally, we convert the PKCS12 file to Java's Keystore format, which can then be used by a web server like Tomcat or JBoss.

keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore server.keystore -srckeystore server.p12 -srcstoretype PKCS12 -srcstorepass changeit -alias testmaxotek

The important thing here is to match the alias with the one used in the previous step.

Useful Commands

Convert Keystore to PKCS12

keytool -importkeystore -srckeystore test-142.keystore -destkeystore test-142.p12 -deststoretype PKCS12

Export the certificate

openssl pkcs12 -in test-142.p12 -nokeys -out test-142.pem

Export the private key

openssl pkcs12 -in test-142.p12  -nodes -nocerts -out test-142.key

Extracting data from Web Pages

Considering the volume of data available on the World Wide Web today, it is a no-brainer that people often need to extract data from it. This is the first in a series of posts that will show you how to extract data from web pages using Data Utensil.

For those of you who are new here, Data Utensil is our product, which aims to be “The single tool for all your data needs”. That is a big goal, but we are working towards it, albeit in tiny steps. As of writing this article, you can explore & manage databases, compare schemas & data, import & export tabular data in various formats and crawl websites using Data Utensil. I will stop my bantering there and get right to the topic.

Extracting data from web pages comprises two main activities: crawling the web pages and importing the data from the HTML markup. Data Utensil splits these two activities into different Jobs. A Job is a long-running activity that runs in the background, and different types of Jobs accomplish different things. For instance, the data comparison Job compares data in the tables of two schemas, while the copy schema Job copies the data from the tables of one schema to another.

The Crawl Website Job crawls web pages and dumps the resultant HTML as files. Crawling starts with one or more URLs, from which Data Utensil discovers new URLs to crawl, all the while dumping the HTML markup of the pages. You can specify exclude path filters to keep URLs from being crawled, or include path filters to restrict the crawling to only those. All the HTML dumps are saved in a folder of your choice, in folders and files that mimic the path hierarchy of the URL. So, the following page: will be saved to C:\My Crawls\\products\data_utensil.html

After the crawl completes, you end up with a bunch of HTML files. These serve as the input to the next step, Import Table from HTML. There can be multiple types of tabular datasets in these files. To choose the correct one, you can select a file from the dump and see a preview of the tabular datasets in that file; you can easily switch between the tabular datasets it contains to locate the correct one. The software identifies the table through its XPath and column names. Data from all tables matching these two criteria will be appended to a new table. In addition to the columns in the table, you can specify virtual columns that extract data from any HTML node/attribute in the dump file using XPaths. Virtual columns can also extract the name of the HTML file or of any folder in its path.

After completing the configuration of columns & virtual columns, you can choose the schema & specify a name for the new database table which will be created. Finally, you specify a Name & Description for this Job.

When you run the job, it will use these configurations and start importing the data from HTML markup.

I am working on a new type of Job that can automatically extract multiple datasets from HTML dumps. The next article will focus on how this automation can save time, while accomplishing the same results.