Human beings have been testing gesture control for their devices for as long as I can remember. Gestures offer a certain ease that conventional buttons and touchscreen controls don't. They have become an integral part of the modern operating system: both Apple and Google have experimented with gestures throughout iOS's and Android's timelines, and the results have been progressive. Smartphones aren't the only devices, either. Gaming consoles have also adopted gesture control, going a level further by building entire games around it. But let's go step by step through this gesture recognition progression over the years.
The Samsung Galaxy S3 was the first device to incorporate gestures that allowed for hands-off control of the device. At the time, those gestures seemed revolutionary: you could swipe across screens, take screenshots and selfies, and perform a whole host of other actions. The world was excited about them, undoubtedly because they felt like an innovative way to use your phone. However, once people started using the hands-off gestures, they proved to have little practicality in everyday use, despite being helpful every once in a while. The main downside was the bogging down of the operating system: with all the extra features in those Galaxy phones, the software started to lag after a while no matter which processor was inside. Samsung was confident it would be able to fold these gestures into later flagships gradually across its line-ups, but it eventually concluded that the cons of gesture recognition technology outweighed the pros and gave up on the idea. During this period, many Chinese and other smaller smartphone makers adopted similar gestures, with similar results; it was, at least, one way to show off.
The feature was meant to last, though. Later gaming consoles treated gestures as a very intuitive way to control the interface: gamers usually sit away from the TV screen, and gestures let them navigate without hunting for the remote or juggling controllers, which are a hassle for non-gaming tasks. Microsoft led the way here, bringing touchless controls to the Xbox with the Kinect. The gaming industry turned out to be a better fit for the technology, with later games using motion controllers or in-game gestures to deliver a more immersive experience. Both Sony and Microsoft jumped on the bandwagon and introduced gesture-driven games, which became quite popular as gamers found they brought the gaming world into their rooms and let them physically participate in the games.
After all this fuss came Microsoft again, with its HoloLens glasses, primarily intended for enterprise use in research and development. The glasses allowed gestures to control virtual and augmented reality. The implementation was difficult, and it wasn't something everybody was going to buy, partly because of the price and partly because the glasses had little use in everyday life, so the product stayed restricted to businesses.
Gesture recognition is classified as a type of touchless user interface (TUI). Unlike a touchscreen device, TUI devices are controlled without touch. Voice-controlled smart speakers such as Google Home and Amazon Echo are prime examples: you control them entirely by speaking commands. Gesture recognition qualifies for the same reason, since it too works without touch. That said, many devices that support gesture recognition also support touchscreens.
Under the hood, gestures require sensors that are constantly active, reading the user's movements and comparing them against the patterns that trigger actions on the device, such as unlocking it, launching an app, or changing the volume. Keeping those sensors and the camera running in the background drains the battery, and smartphones can only store so much juice. Still, there may come a time when people grow tired of the current methods of device control and want an easier way to interact with their devices, and gestures sit near the top of the list of candidates to replace touch. For smartphones, the battery cost shouldn't be that big of a problem.
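To make the compare-and-dispatch idea concrete, here is a minimal sketch in Python. The sensor trace, threshold, gesture names, and resulting actions are all hypothetical, for illustration only; a real phone would run machine-learning models over camera or radar frames rather than a simple threshold.

```python
from typing import Callable, Dict, List

def classify_swipe(trace: List[float], threshold: float = 0.5) -> str:
    """Classify a 1-D horizontal motion trace from a (hypothetical) sensor.

    A large positive net displacement counts as a right swipe, a large
    negative one as a left swipe; anything else is ignored.
    """
    net = trace[-1] - trace[0]
    if net > threshold:
        return "swipe_right"
    if net < -threshold:
        return "swipe_left"
    return "none"

# Map recognized gestures to device actions (names are illustrative).
actions: Dict[str, Callable[[], str]] = {
    "swipe_right": lambda: "next_track",
    "swipe_left": lambda: "previous_track",
}

def handle(trace: List[float]) -> str:
    """Recognize a gesture from the trace and dispatch its action."""
    gesture = classify_swipe(trace)
    return actions.get(gesture, lambda: "no_op")()
```

The dispatch table is also where battery trade-offs show up in practice: the classifier has to run continuously on every sensor frame, even when the result is "none", which is why always-on gesture sensing is costly on a phone.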
The real use case for gesture control seems to be in vehicles, which are moving toward complete autonomy. Handing control of the car over to the computer lets the driver relax and creates a social atmosphere in the cabin, where gestures could be used to adjust the music volume or the heating arrangements. If automotive companies integrate this technology properly alongside autonomous driving, reaching for the in-car screens won't be necessary anymore, making the whole driving experience hands-free and the car a properly comfortable means of transportation.
Among the advantages of gesture control is less wear and tear on devices that support it: with less physical contact, the device can stay put while the user issues commands from a distance. Gesture recognition also opens the door to a whole new world of input possibilities. Instead of being limited to traditional forms of input, users can experiment with gesture-based ones, and some devices even let users set up their own gestures. Still, the future of gesture-based technology remains uncertain, given questions about the consistency and accuracy of recognition; for now, most people would rather use touch controls.