
Artificial Intelligence – A little less artificial, a lot more intelligent





When we think of AI, most of us picture science-fiction movies where robots prepare our meals and take care of household duties. We aren’t there yet, but we are already seeing early signs of automation and intelligence in the devices we use every day.

Today, when we ask Amazon’s Alexa to add items to our cart, or Google Assistant to find the best Chinese restaurant nearby, we are interacting with AI. With its help, websites can now predict what we might need next. There is also software that can identify objects around us, and voice assistants like Siri and Cortana understand our spoken commands.

AI is training our machines to exhibit human-like behaviour. Phones can identify users through facial recognition, answer calls for us and set preferences based on our past usage. All of which raises an interesting question: how is AI becoming capable of doing this? Let’s try to better understand AI and where it’s heading.

How does AI learn?

The key to AI learning, or machine learning, is the neural network. Just as the human brain has different layers of neurons exchanging data, these networks are built from interconnected layers of algorithms that share information with each other. When an AI has to identify or learn something new, it passes the attributes of the data through these layers. During training, the weights attached to the connections are adjusted again and again until the output of the neural network is very close to the desired output.
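To make this concrete, here is a minimal sketch in Python of that weight-adjustment loop. The data, the single layer of weights and the learning rate are all invented for illustration; real networks have many layers and millions of weights, but the principle is the same: nudge the weights until the output is close to the desired output.

```python
# Minimal sketch of training: adjust weights until the output
# matches the desired output. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((100, 3))             # 100 examples, 3 attributes each
true_w = np.array([0.5, -1.2, 2.0])  # the relationship we want learned
y = x @ true_w                       # the desired outputs

w = np.zeros(3)                      # start with arbitrary weights
lr = 0.1                             # learning rate: size of each adjustment
for step in range(2000):
    error = x @ w - y                # how far the output is from desired
    w -= lr * (x.T @ error) / len(x) # nudge each weight to shrink the error

print(np.round(w, 3))                # ends up very close to [0.5, -1.2, 2.0]
```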

Think of identifying a cat. As humans, we are obviously familiar with the attributes of a cat, such as sharp ears, a small nose and a short tail. A computer, however, cannot connect these attributes the way humans do. Whenever an AI has to identify an image, it first looks for outlines and edges, and then proceeds to find particular attributes and shapes. The gathered data is passed through the neural network and matched against learned patterns until a final output is determined.
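As an illustration of this layered matching, the sketch below assembles a tiny image classifier with TensorFlow’s Keras API. The image size, layer widths and the cat-or-not setup are assumptions made up for this example, not a tested design; the point is the layering, with early layers responding to edges and outlines and deeper layers combining them into attributes.

```python
# Toy "is it a cat?" classifier. Early convolutional layers pick up
# edges and outlines; deeper layers combine them into attributes.
# All shapes and sizes here are illustrative, not tuned.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # a small RGB image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # edge/outline detectors
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # shapes: ears, noses, tails
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # combine attributes
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability it's a cat
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()   # the model still needs labelled photos to train on
```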

AI for images generally uses a convolutional approach, where the image’s attributes are fed directly into the neural network. Things work quite differently for voice assistants such as Siri, Cortana or Google Assistant. These follow a recurrent approach, where pieces of a voice command are held temporarily in an internal memory: as the neural network processes each part of the command, it combines it with what it remembers of the earlier parts to produce the final output.
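For comparison, here is the recurrent counterpart, again only a sketch: the input shape (time-steps of audio features) and the set of ten commands are invented for illustration. The difference from the convolutional model above is the LSTM layer, which carries an internal memory from one time-step of the command to the next.

```python
# Toy voice-command classifier. The LSTM layer keeps an internal
# memory as it steps through the command, so earlier sounds inform
# how later ones are interpreted. Shapes are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 13)),           # 100 time-steps, 13 audio features
    tf.keras.layers.LSTM(32),                         # recurrent memory across steps
    tf.keras.layers.Dense(10, activation="softmax"),  # one of 10 known commands
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```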

What are the leading neural network platforms?

With AI playing a major role in modern-day technologies and services, every major tech firm is racing to develop a robust AI platform. Major cloud platforms such as Amazon Web Services and Google Cloud provide the necessary infrastructure: cloud storage, vast amounts of data for training models, and tools for presenting results clearly.

Google has recently opened access to TensorFlow, its machine-learning framework, which lets developers explore a huge data library and AI development tools. Likewise, Amazon has its own machine-learning platform, Amazon SageMaker, which is widely used to build the models behind Alexa-powered devices.

There are also open initiatives such as OpenAI, which help developers access data and tools for AI development.

How do applications benefit from neural networks?

AI has been around for a long time, typically in complex systems such as airplane autopilots and factory automation. However, it became far more accessible to everyday users after 2015, largely thanks to the vast interconnection of neural networks and sensors constantly exchanging data.

Using Google’s TensorFlow framework, applications such as Google Lens can now identify objects around you and even redirect you to websites where you can buy them. Other applications, like Google Photos, scan users’ photo libraries to identify objects and even people. The next time you take a photo of a similar object or of that person, the app automatically tries to group it into an album of matching shots.

AI also has a lot to do with our online shopping experience. Shopping websites constantly map users’ searches and buying patterns, and try to recommend products they might need. AI also helps these websites analyse new trends, making online shopping a much better experience.
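A simplified illustration of this kind of recommendation, with invented user names and purchase counts: represent each user by how often they buy from each category, then suggest what the most similar user has bought. Real systems use far richer signals than this.

```python
# Sketch of taste-based recommendation via cosine similarity.
# Users and purchase counts are hypothetical.
import numpy as np

categories = ["phones", "books", "kitchen", "games"]
purchases = {                       # per-user purchase counts by category
    "asha":  np.array([5.0, 0.0, 1.0, 4.0]),
    "vik":   np.array([4.0, 1.0, 0.0, 5.0]),
    "meera": np.array([0.0, 6.0, 3.0, 0.0]),
}

def cosine(a, b):
    """Similarity of two taste vectors; 1.0 means identical direction."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = "asha"
nearest = max((u for u in purchases if u != target),
              key=lambda u: cosine(purchases[target], purchases[u]))
print(f"Recommend to {target} what {nearest} bought")  # -> vik
```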

AI is also deeply integrated into the entertainment industry. When we browse or watch videos of a particular category on YouTube, AI analyses our viewing patterns and often fills our homepage with similar videos. Video-streaming platforms like Netflix likewise build libraries for users by analysing the type of content they usually watch, and group movies or shows with similar genres and storylines rather than relying on keywords alone.

How is AI getting better?

The biggest breakthroughs for AI in recent years have come from the availability of massive amounts of data and better hardware. Before 2015, AI algorithms mostly ran on general-purpose CPU cores, which consumed a lot of time. The introduction of AI co-processors and DSPs (Digital Signal Processors) has changed things drastically: these co-processors are much more efficient and radically reduce AI computing latency.

An example of such a custom chip is the recently launched Qualcomm Snapdragon 855 with its Hexagon 690 DSP, which includes a dedicated tensor accelerator, similar in spirit to Google’s Tensor Processing Unit (TPU), for higher processing speeds. It will be exciting to see how Snapdragon 855-equipped devices perform, since it is among the first mobile chips to sport a dedicated tensor processor. Likewise, Apple’s A11 and A12 Bionic chips, used in recent iPhones (iPhone 8 and later), carry a Neural Engine co-processor that powers features such as Siri, the Camera and face detection.

As the AI industry gets more competitive, these tech giants are constantly focusing on developing powerful hardware to make AI more efficient and seamless than ever before.

AI and its future

After the initial launch of Amazon’s Alexa voice assistant, many users reported problems with speech recognition. As the underlying AI learning got better, however, Alexa learned to understand and respond in regional languages. With researchers constantly working on AI’s speaking capabilities, expect talking to computers to become the norm alongside traditional forms of human-machine interaction.

AI has also pushed the capabilities of facial-recognition software to the point where Chinese tech giant Baidu says it can match faces with 99 per cent accuracy. In the near future, we can expect AI to have a dramatic impact on healthcare too, helping doctors pick out tumours, aiding researchers in spotting genetic sequences related to diseases, and more.

Our desire to have robots take care of household duties might not become a reality this year, but we already have cars that can partly drive for us. Though not fully autonomous, vehicles from carmakers like Tesla have been successfully tested monitoring driver awareness, maintaining speed and keeping to road lanes. Google Duplex, recently demonstrated at Google I/O, can even make calls and book appointments on our behalf. With ever-increasing AI integration in devices and services, the possibilities of what AI can do in the future open numerous doors for the imagination.