Best movies and songs of 2012

My blog was started for TES – Technology, Entertainment and Science. But I have not done justice to entertainment, since I have written only one article in that category.
Let me compromise a bit – an article about my favorite movies and songs of 2012, which I never want to miss. Some of them may have been released before 2012, but I heard them this year, and thanks to my music-freak cousin for sharing most of them.
Also, this is my personal opinion and I don’t discount other songs and movies at all.

Try the songs and movies listed (if you haven’t already) – hope you will enjoy them!!


Songs

1) Deshi Basara – Moroccan Arabic – BGM in The Dark Knight Rises – music by Hans Zimmer
The entire OST and BGM is rocking and this one is the best…gives you goosebumps for sure

2) Nenjukkulle – Tamil – from the movie Kadal – sung by newcomer Shakthisree Gopalan
Truly marvelous and well presented in MTV Unplugged –


3) Allah Hi Raham – Hindi – MTV Coke Studio – sung by Shankar Mahadevan
This song was out in 2011, but I heard it along with Coke Studio 2 – truly remarkable singing – no words about Shankar – he is a genius who can sing both Hindustani and Carnatic classical –


4) Madari – Hindi – MTV Coke Studio – sung by Vishal and Sonu Kakkar
Excellent presentation by Vishal and superb guitar –


5) Banjara – Hindi – MTV Coke Studio – Vijay Prakash and Nandini
After hearing this, you will understand that Vijay is an unsung HERO – the classical portion is amazing –


6) Yeh Jo Des – Hindi – MTV Unplugged – sung by the maestro Rahman himself
The original has been my favorite since Swades – he improvised on it here – mind-blowing –


7) I Believe – Hindi/English – The Dewarists – sung by Agnee, Parikrama and Shilpa
Inspiring lyrics, nice blend of guitar and voice –


8) Sail – English – AWOLNATION
Heard this song along with Felix Baumgartner’s space jump – the song is amazing and well suited to the mission –


9) Gangnam Style – Korean – sung by PSY
Wow…this was the surprise of 2012 – the song was an instant hit with millions of views – it proves music will break all barriers – any age group can enjoy it, and PSY taught us a simple dance step too –


10) Subhan Allaha – Malayalam – Ustad Hotel – sung by Navin Iyer
Amazing song with Sufi and classical blend –


11) Pareshaan – Hindi – Ishaqzaade – sung by Shalmali Kholgade
Nice song with rock blend


12) MTV Coke Studio Season 2 – the whole program is amazing –


13) Kyon – Hindi – Barfi – sung by Papon and Sunidhi
Another superb song from Papon – he deserves more songs



Movies

1) The Dark Knight Rises – English – directed by Christopher Nolan
The 2nd best of the trilogy – what a way to end an epic –


2) Gangs of Wasseypur – Hindi – directed by Anurag Kashyap
A masterpiece – the Indian version of The Godfather –


3) Paan Singh Tomar – Hindi – directed by Tigmanshu Dhulia
Class acting by Irrfan Khan –


4) Barfi – Hindi – directed by Anurag Basu
Loved it for Ranbir’s acting and the screenplay


5) Pizza – Tamil – directed by Karthik Subbaraj

Expect the unexpected!! – Different experience –


6) Ustad Hotel – Malayalam – directed by Anwar Rasheed

Superb movie – awesome background music and a powerful performance by the late Thilakan

There were also a number of superb movies, including 22 Female Kottayam and Spirit, but this one is special because of Thilakan’s performance


Last One – Zeitgeist 2012
A video compilation of 2012’s happenings – really amazing

So, do you agree with my choices? Please post your opinions and comments…

Hope there will be good music and amazing movies in 2013 … Happy New Year!


Mobile communication – infrared, Bluetooth and NFC

Yesterday I was trying to send some videos totalling about 200 MB to another mobile. Though it went smoothly, a question came to my mind: I use this technology every day, but have never tried to understand it.
It also made me very happy, since I got my topic!!

Infrared, Bluetooth and now NFC – each of these technologies has done its part in a different period. I remember keeping 2 Nokia phones in a straight line to transfer an image. Now it’s just a tap away. Here are some details about each technology – one is outdated, one is widely used, and we are yet to know the potential of the last one.

Infrared
As the name indicates, communication is established via infrared signals. This technology is used by most remote controls. It is device-specific, and there should be a direct line of sight (max 3 feet) between transmitter and receiver. The communication is more secure since it can only be intercepted by a device in that line of sight, and it’s one-to-one.
I’m not going into more detail about this technology since it is not used in mobile devices anymore.

Bluetooth
Compared to infrared, several Bluetooth devices can work together, and it’s omni-directional. Current devices can communicate within a range of about 30 feet. Bluetooth uses radio frequency, so transmission is possible through walls and other objects, which is why it is widely used in computers, PDAs, headsets, mobile handsets etc. Devices can communicate with each other irrespective of manufacturer, since they all use the standard 2.4 GHz (ISM – Industrial, Scientific and Medical – band) frequency.


Bluetooth communication needs no user intervention and uses very little power: we connect devices such as a phone, car GPS, PC etc. and control them smartly (it automatically switches to the appropriate device – hands-free is an example: if you set up your car audio and home landline phone, calls are switched automatically to those devices…pretty cool, right?). Current (2.0) devices can transfer data at up to 3 Mbps.

As I said earlier, it works in RF, and lots of other devices such as cordless phones, baby monitors and garage-door openers use the same ISM band. Bluetooth uses a very weak signal of 1 milliwatt to avoid interference, which is what limits the range to about 30 feet. It can connect up to 8 devices simultaneously and uses a spread-spectrum frequency-hopping method so that connections avoid interfering with each other (too technical, so no details).
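The frequency-hopping idea can be illustrated with a toy Python sketch. This is not the real Bluetooth hop-selection algorithm (which is derived from the master device’s clock and address) – it just shows how two paired devices sharing a seed can hop across the 79 channels in lock-step:

```python
import random

# Bluetooth classic uses 79 channels of 1 MHz width in the 2.4 GHz ISM band,
# hopping many hundreds of times per second. This toy model just derives a
# shared pseudo-random channel sequence from a common seed.
CHANNELS = 79
BASE_MHZ = 2402  # centre frequency of the first channel, in MHz

def hop_sequence(seed, hops):
    """Both devices seed the same PRNG, so they hop in lock-step."""
    rng = random.Random(seed)
    return [BASE_MHZ + rng.randrange(CHANNELS) for _ in range(hops)]

master = hop_sequence(seed=0xC0FFEE, hops=5)
slave = hop_sequence(seed=0xC0FFEE, hops=5)
assert master == slave  # same seed -> same sequence -> the pair stays in sync
print(master)
```

Two unrelated connections would use different seeds and therefore mostly land on different channels at any instant, which is how many links share the same band.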

Like other wireless networks, Bluetooth also faces security issues, since hackers can try to grab your data or interfere with signals on the same frequency. Bluetooth offers several security options to avoid this – allowing only trusted devices, making devices non-discoverable, pairing etc. Related to hacking is bluejacking – sending a business card as a text message; if the contact is added to the address book by mistake, the attacker can gain control of the address book and phone.
BT communication can also be done between a computer and other devices using a USB dongle (most laptops now have this built in)


NFC
Near Field Communication, a.k.a. NFC, is the current hot topic in the smartphone market, and it takes communication to the next level. I don’t think it is a replacement for BT, but NFC is preferable in some situations.
In simple words, BT is like a phone call – sender and receiver should both be available – while NFC is like an e-mail – we can transfer some information to a device (called a tag) and it can then be read later by any other device

As the name (near field) indicates, the maximum distance for communication is only about 4 centimeters. NFC can interact with another device with a single wave, and it’s faster to connect than BT (it needs only about 1/10 of a second to establish a link).
It also reduces the security risks mentioned for BT, and NFC needs minimal power compared to BT. However, the data rate of NFC is only 424 kbit/s, so for large transfers it is used in combination with BT – NFC establishes the pairing instantly, and BT then carries the data.
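To see why that combination makes sense, compare idealised transfer times for the 200 MB of video mentioned at the start (raw link rates, ignoring all protocol overhead):

```python
def transfer_seconds(size_mb, rate_bits_per_s):
    """Idealised time to move size_mb megabytes at the given link rate."""
    return size_mb * 8 * 10**6 / rate_bits_per_s

bt_s = transfer_seconds(200, 3_000_000)   # Bluetooth 2.0: 3 Mbit/s
nfc_s = transfer_seconds(200, 424_000)    # NFC: 424 kbit/s

print(f"Bluetooth: {bt_s/60:.1f} min, NFC: {nfc_s/60:.1f} min")
# NFC is fine for a tenth-of-a-second handshake, hopeless for bulk data -
# hence "NFC to pair, Bluetooth to transfer".
```

Even under these generous assumptions, the NFC-only transfer takes roughly seven times as long as Bluetooth.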


NFC can make use of un-powered chips called tags to communicate – that’s why I said it’s like an e-mail. Many vendors are creating these tags, and Sony’s Xperia SmartTags are among the popular ones. You can save your phone settings or browser bookmarks to a tag, and they can be restored on any other NFC-powered device. NFC is based on RFID (radio-frequency identification) technology.
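What actually sits on a tag is a small binary structure called an NDEF (NFC Data Exchange Format) record. Here is a sketch in Python of the short-record form of a plain-text record, following the NDEF layout (header byte, type length, payload length, type, payload):

```python
def ndef_text_record(text, lang="en"):
    """Build a single short NDEF record of well-known type 'T' (text)."""
    # Text payload = status byte (language-code length) + language code + text
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    header = 0xD1  # MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known type)
    return bytes([header, 1, len(payload)]) + b"T" + payload

record = ndef_text_record("Hello NFC")
assert record[0] == 0xD1 and record[3:4] == b"T"
print(record.hex())
```

A reader device parses the same fields back out, which is why a tag written by one phone can be read by any other NFC device.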

Android and NFC
Android uses NFC technology as Android Beam, launched in Android 4.0 ICS. They have customized the technology and made data transfer a dream – just tap the devices together. Android also supports a technology called Wi-Fi Direct, previously known as Wi-Fi P2P; as the name shows, your mobile device can do peer-to-peer networking and transfer data. I think this is the coolest feature (torrents on a smartphone?)


NFC, Android Beam and S Beam have so many features that I will have to cover them in another article. We can expect more from NFC since it’s still at a growing stage, and Android 4.1 Jelly Bean has already started hitting the market. I pity iPhone users, since they can’t try these things in the near future

The End
I guess you now have an overall idea about some of the most important communication/transfer technologies we have used so far. If you liked it, post your comments and keep reading!!! Thanks


Cloud computing vs. Server virtualization

I hope you have read my previous articles – Cloud computing – an insight and Virtualization – Basics – and got an idea about these 2 most happening topics in the IT industry. Now let’s see the differences between the 2.


1) Virtualization is the deployment of existing infrastructure to provide more services through a shared mechanism; it can be considered a sub-set of cloud computing. Cloud computing is an approach to consolidate IT infrastructure across enterprises, deployed on a shared-service model. Virtualization is an abstraction of the hardware layer that runs multiple virtual machines on a single physical machine.

2) Virtualization is a technical term; it’s for IT administrators and is not governed. Cloud computing is business; it’s for everyone, and it is governed.

3) Virtualization focuses on leveraging IT infrastructure optimally to reduce operational costs. Software like Xen and VMware allows different applications and OSes to be consolidated on the same physical machine. Cloud computing delivers computing resources and software on an on-demand, pay-per-use basis. Companies opt to buy such services from service providers rather than hosting them in their own data centers.

4) Virtualization helps make cloud computing a reality (both public and private cloud). The cloud model is a strategic decision to move from the traditional “cook and eat” model to the service-oriented “eat at a restaurant” model, which is more efficient, flexible and simple (pay for what we use).


5) Both technologies were developed to maximize the use of computing resources at minimal cost, though the approaches are different. Both can save money in different ways, and virtualization may be used to provide cloud computing.
6) To implement virtualization, organizations need upfront investment, since it involves the purchase of servers and infrastructure. There is a large capital cost involved, but the money is saved over a period. Cloud is the opposite: organizations can start using resources with little capital cost. But as applications become popular and use more resources, it might become more expensive than setting up your own virtual infrastructure.
Implementation of cloud also depends on the credibility / goodwill of the service provider (vendor), since the data resides on their servers. With virtualization, the organization has more control over data backup, disaster-recovery mechanisms etc. History: Microsoft’s data center, which provided T-Mobile’s Sidekick service, crashed in 2009. Customer data was lost, and it was a huge blow to T-Mobile’s reputation.
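The capital-cost trade-off in point 6 can be put into a toy break-even model (all the figures below are invented purely for illustration):

```python
def cumulative_cost_virtualization(months, capex=50_000, opex_per_month=1_000):
    """Own virtual infrastructure: big upfront spend, low running cost."""
    return capex + opex_per_month * months

def cumulative_cost_cloud(months, fee_per_month=3_500):
    """Cloud: no upfront spend, pay-per-use every month."""
    return fee_per_month * months

# Find the first month where owning becomes cheaper than renting.
breakeven = next(m for m in range(1, 120)
                 if cumulative_cost_virtualization(m) < cumulative_cost_cloud(m))
print(f"Owning wins after {breakeven} months")
```

Before the break-even point the cloud is cheaper; after it, the owned infrastructure pays for itself – which is exactly why the decision depends on how long and how heavily the application will run.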
The End
In these 3 articles, I tried to focus on the fundamentals of the 2 concepts. Now I am gonna start my journey into the core of these subjects – architecture, implementation, layers, infrastructure etc. – and blog more.
Bye till then !!!

Virtualization – Basics

Virtualization – another term, like cloud computing, that we have been hearing a lot in the recent past, and I want to put down some interesting details about it. It can be considered a scaled-down version of cloud computing; let’s see the details.


Let’s take a scenario – I am running a small IT firm with 10 employees and I need to buy infrastructure to run the firm: a server, 10 desktops, software (all licensed), administrators to maintain the servers etc. If I decide to hire more employees, I have to buy more desktops and spend more money, and this becomes a cumbersome process. Instead of buying 10 high-configuration desktops, I could invest in one high-end machine (lots of CPU, RAM and storage) and create 10 virtual desktops, one for each employee. In this approach it is very easy to allocate one more virtual environment as I hire more employees. Hope you understood the very basic concept of virtualization and its benefits. Now let’s take a deep dive into the topic (more details under server virtualization)
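The arithmetic behind this scenario is just resource slicing. A minimal Python sketch (the host specs below are invented for illustration):

```python
# One high-end host carved into equal virtual desktops.
HOST = {"cpus": 32, "ram_gb": 128, "disk_gb": 4000}

def carve(host, n_desktops):
    """Split the host's resources evenly into n virtual desktops."""
    return {resource: amount // n_desktops for resource, amount in host.items()}

desktop = carve(HOST, 10)
print(desktop)  # each employee gets a slice of the same physical box
# Hiring employee #11 is just carve(HOST, 11) - no new hardware purchase.
```

The point is not the numbers but the flexibility: re-slicing a single machine is a configuration change, while adding a physical desktop is a purchase.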
Why virtualization?
Due to mobile computing and other applications, servers need to be very powerful, with multiple CPUs and processors to run complex tasks. Network administrators dedicate each server to a specific application or task (DB, web, network etc.), since many of these tasks do not play well with others, and managing them becomes difficult.
Though servers are dedicated to each application, they may not be utilized to their capacity in terms of processing power and storage. Servers also take up a lot of physical space and generate heat, and the data center becomes crowded with racks. Virtualization can address these issues, since physical servers are converted to multiple virtual machines. This is not a new concept – the same approach was used on mainframes decades ago, and in the last few years it has become feasible for ordinary servers.


There are different types of virtualization – Hardware, Desktop, Software, Memory, Storage, Data and Network. I am more interested in Hardware, Software and Desktop virtualization and will talk about it more.
Hardware (Server) Virtualization – also known as platform virtualization, refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. A computer running Microsoft Windows may host a virtual machine that looks like a computer with a Linux OS. There are 2 parties in this virtualization:
Host machine – the physical server on which the virtualization takes place; the software or firmware that creates virtual machines on the host hardware is called a Hypervisor / Virtual Machine Monitor
Guest machine – the virtual machines; they behave like physical machines but do not need dedicated processing power or storage of their own
This virtualization can be created in 3 ways
Full virtualization – uses a special kind of software called a hypervisor that interacts directly with the physical server’s CPU and disk space. It serves as a platform for each virtual server’s OS and keeps each virtual server completely independent and unaware of the other virtual machines. Each virtual machine runs its own OS, which means one can run Windows and another Linux.
The hypervisor monitors the physical server’s resources and relays them to the appropriate virtual server as it runs applications. This takes some resources of its own, which can affect overall server performance as the number of virtual machines and applications grows. This is the method most organizations opt for, since servers can run different OSes and more support is available.
Para-virtualization – similar to full virtualization, but the guests are aware of the other virtual machines. The same hypervisor software is used, but it needs fewer resources to manage the guest OSes, since each guest is already aware of the demands the other OSes place on the physical server. Administrators can choose this method if the servers run different OSes, but the technique is relatively new and only a few companies offer the software.
OS-level virtualization – again similar to full virtualization, but it does not need hypervisor software; the virtualization capability is part of the host OS, which performs all the functions a hypervisor would. All guest servers must run the same OS, though each remains independent of the others. Organizations can prefer this approach if all physical servers run the same OS, since it’s faster and more efficient than the other methods.
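Whichever of the three methods is used, the hypervisor’s basic bookkeeping job is handing out slices of the physical machine and refusing to overcommit. A toy Python sketch of that idea (the class and numbers are my own illustration, not any real hypervisor’s API):

```python
class Hypervisor:
    """Toy model: tracks physical CPU/RAM and allocates it to guest VMs."""
    def __init__(self, cpus, ram_gb):
        self.free = {"cpus": cpus, "ram_gb": ram_gb}
        self.guests = {}

    def create_guest(self, name, os, cpus, ram_gb):
        # Refuse to hand out more than the physical box actually has.
        if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
            raise RuntimeError(f"not enough physical resources for {name}")
        self.free["cpus"] -= cpus
        self.free["ram_gb"] -= ram_gb
        # In full virtualization, each guest runs its own unaware OS.
        self.guests[name] = {"os": os, "cpus": cpus, "ram_gb": ram_gb}

hv = Hypervisor(cpus=16, ram_gb=64)
hv.create_guest("web", os="Linux", cpus=4, ram_gb=16)
hv.create_guest("db", os="Windows", cpus=8, ram_gb=32)  # different OS, same box
print(hv.free)  # resources left for further guests
```

A real hypervisor also schedules CPU time and mediates device access, but the allocate-and-isolate pattern above is the core of it.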
Benefits of virtualization:
·       Redundancy without purchasing additional hardware
·       Safety – if one server fails, another server running the same application can replace it
·       Isolated, independent systems in which programmers can test new applications or OSes
·       Environment friendly – conserves space and storage, and less heat is generated by servers

Limitations:
·       Network administrators should study the network architecture and needs before attempting a solution
·       Virtualization should not be implemented on servers with heavy load (high demand for processing power), since the server’s CPU is divided among the virtual servers. The server may become very slow, or even crash, as the demands of the applications in the virtual machines increase
·       It is not a good idea to create too many virtual servers on one physical server, since it will create overload. The processing power and storage allocated to each server will be divided, and a virtual server may not satisfy the business need
·       Migration is both a benefit and a challenge – with the help of third-party hardware and software, virtual servers can be moved from one network to another, but due to technology limitations it is only possible to migrate a virtual server between physical machines that use the same manufacturer’s processor
 The End
More and more organizations have opted for this technology in the recent past. Virtualization products are offered by major companies like VMware, Microsoft (Hyper-V) and Citrix (XenServer).
In my next article, I am gonna write about Cloud computing vs. Virtualization

Cloud computing – an insight


IT infrastructure is changing according to the fast-paced world’s needs. People want to stay connected with work, family and friends. Data needs to be available anywhere, on different devices such as computers, tablets and smartphones. Here I am talking about large data – images, videos, documents or reports. So where do we store all this data?

Cloud computing is a technology that uses the internet and central remote servers to maintain data and applications. This allows consumers and businesses to use applications without installation and access their personal files at any computer with internet access. This technology allows for much more efficient computing by centralizing storage, memory, processing and bandwidth. 

The World Wide Web offers familiar examples of cloud computing. We use Gmail, Yahoo etc. for sending mail; we don’t install any software, and we access the service using a web browser. Cloud computing, similarly, is the delivery of computing as a service rather than a product, and it doesn’t require end-user knowledge of the physical location or configuration of the system. Here’s the logical diagram of CC:


In a cloud computing system, there’s a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user’s side decrease. The only thing the user’s computer needs to be able to run is the cloud computing system’s interface software, which can be as simple as a Web browser, and the cloud’s network takes care of the rest. Cloud computing has 3 service models – Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS)

Cloud Computing Architecture
When talking about a cloud computing system, it’s helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the “cloud” section of the system.
The front end includes the client’s computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.
On the back end of the system are the various computers, servers and data storage systems that create the “cloud” of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. Most of the time, servers don’t run at full capacity. That means there’s unused processing power going to waste. It’s possible to fool a physical server into thinking it’s actually multiple servers, each running with its own independent operating system. The technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines.
If a cloud computing company has a lot of clients, there’s likely to be high demand for a lot of storage space. Some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients’ information stored. That’s because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients’ information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that would otherwise be unreachable. Keeping copies of data as a backup is called redundancy.
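The “at least twice the number of storage devices” rule is simple arithmetic (the 4 TB device capacity below is an assumed figure for illustration):

```python
import math

def devices_needed(client_data_tb, device_capacity_tb=4, replicas=2):
    """Storage devices needed when every byte is kept `replicas` times."""
    return math.ceil(client_data_tb * replicas / device_capacity_tb)

# 500 TB of client data, kept in two copies on (assumed) 4 TB devices:
print(devices_needed(500))
```

Raising the replica count buys more safety at a directly proportional hardware cost, which is why providers pick the replication factor carefully.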
Bandwidth and latency requirements vary depending on the particular cloud application. A latency of 50 ms or more, for example, might be tolerable for the emergent public desktop service. Ultralow latency, near 1 ms, is needed for some high-end services such as grid computing or synchronous backup. Ensuring that each application receives its necessary performance characteristics over required distances is a prerequisite to cloud success.
There are different protocols available which will be suitable for linking servers and storage in and among cloud centers – Fibre Channel over Ethernet (FCoE), InfiniBand, and 8-Gbps Fibre Channel. This topic is very advanced and will be covered in another post

The applications of cloud computing are practically limitless. With the right middleware, a cloud computing system could execute all the programs a normal computer could run.

Clients would be able to access their applications and data from anywhere at any time. They could access the cloud computing system using any computer linked to the Internet. Data wouldn’t be confined to a hard drive on one user’s computer or even a corporation’s internal network.

It could bring hardware costs down. Cloud computing systems would reduce the need for advanced hardware on the client side.

Corporations that rely on computers have to make sure they have the right software in place to achieve goals. The companies don’t have to buy a set of software or software licenses for every employee. Instead, the company could pay a metered fee to a cloud computing company

If the cloud computing system’s back end is a grid computing system, then the client could take advantage of the entire network’s processing power. Often, scientists and researchers work with calculations so complex that it would take years for individual computers to complete them. On a grid computing system, the client could send the calculation to the cloud for processing. The cloud system would tap into the processing power of all available computers on the back end, significantly speeding up the calculation.

The biggest concerns about cloud computing are security and privacy. The idea of handing over important data to another company worries some people. Corporate executives might hesitate to take advantage of a cloud computing system because they can’t keep their company’s information under lock and key.
Security – the companies that provide cloud computing should have reliable security measures in place; otherwise the service would lose all its clients.
Privacy – if a client can log in from any location to access data and applications, it’s possible the client’s privacy could be compromised. Cloud computing companies will need to find ways to protect client privacy. One way is to use authentication techniques such as user names and passwords. Another is to employ an authorization format – each user can access only the data and applications relevant to his or her job.
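That authorization format – each user can access only what is relevant to his or her job – boils down to a permission lookup. A minimal sketch (the users and applications are made up for illustration):

```python
# Toy role-based authorization: usernames mapped to permitted applications.
PERMISSIONS = {
    "alice": {"email", "crm"},
    "bob":   {"email", "payroll"},
}

def can_access(user, app):
    """Authorization check: is `user` allowed to open `app`?"""
    return app in PERMISSIONS.get(user, set())

assert can_access("alice", "crm")
assert not can_access("alice", "payroll")   # not relevant to her job
assert not can_access("mallory", "email")   # unknown users get nothing
```

Real systems layer groups, roles and audit logs on top, but the principle of default-deny lookup is the same.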
Real life examples
1) Email services like Yahoo and Gmail store your mails, which may also have file attachments. Some enterprises rent such email services for internal use from vendors to keep their capital costs low (email servers & software are a bit costly to buy & maintain)
2) You can store files on servers on the web so that you can access them anywhere. How they are stored/retrieved is not important. e.g. Dropbox
3) If you want a database to store information but don’t want the hassle of installing one on your machine, you get that as well. e.g. Microsoft Azure
4) You get CPU time too in cloud computing. So, if you are working on some complex problem and temporarily require a server to perform the computation, you can get that as part of cloud computing, without buying such high-end servers yourself. e.g. Google App Engine, Azure
Apple has also introduced its own cloud service (iCloud), which basically caters to email and file storage.

Firefox OS – another Mobile OS

Here comes another OS for smartphones, to compete with iOS, Android, Windows Phone 8 and BlackBerry OS. Is it a game changer? Let’s see the details

There is something interesting about this OS, though the news did not excite me, being a hardcore Android fan and user. Firefox OS is a Linux-based open-source OS developed by Mozilla that lets HTML5 applications talk to the hardware directly via JavaScript. The OS and its apps are built entirely on the capabilities of HTML5.

The concept of the OS is similar to Google’s Chromebook – treating websites as apps, running without native code. Also, like Chrome OS, Firefox OS is essentially a web browser running on top of Linux.

Now the question arises about HTML5: is it powerful enough to replace Java or C++ and build an entire mobile ecosystem? There was news about Facebook founder Mark Zuckerberg’s comment that HTML5 was not ready and that betting on it had been a big mistake. HTML5 developers took up this challenge and created an app called Fastbook, which was considerably faster than the iOS and Android versions of Facebook’s native app.

Have a look at the video. One thing that stopped me from developing an Android app was Java; HTML5 is a good starting point for creating something, since it’s extremely simple.



The different layers of Firefox OS are:

Gonk – the middleware, containing the Linux kernel + software libraries + hardware abstraction layer → runs on top of the mobile chipset and drivers
Gecko – the application runtime, which implements HTML5 + CSS + JavaScript
Gaia – the user interface, which includes the home screen + application launcher


Sony has shown the guts to experiment with Firefox OS and has released an Xperia E build for developers. The 1st smartphones should be released by July 2013, first in South Africa and later in Asian and European markets (good that they chose an African country, avoiding direct competition with the latest and most powerful OS versions – Android 4.2 and iOS 7). Mozilla is also working with other manufacturers – Alcatel, LG, ZTE and Huawei



Security will be a main concern, since everything runs as web code rather than native code, and hackers have started targeting smartphones these days.

The End
So, is this a game changer? I don’t think so at this point in time. Even though HTML5 is good, I am not sure it can recreate all the apps available on Android, iOS and Windows Phone 8. Anyway, it’s a good way for a developer to create a few apps and explore smartphone development