
AI’s Place in the IoT Infrastructure

  • Uma Pinagli – Business President, element14

AI has certainly come a long way since the term was first coined in 1955 by John McCarthy, a cognitive scientist who used it to describe machines that could reason like humans. AI is now about computers’ ever-increasing ability to search and recognise patterns in continuously growing stores of data. It also addresses two further questions: how an AI system can continuously learn from its incoming data and improve accordingly, and how it can act on its own conclusions without reference to humans.

AI systems and the IoT have now become part of our daily lives. However, if we are to grant AI machines greater autonomy, engineers and designers will need to consider the potential impact of doing so.

While offering enormous benefits in areas such as healthcare and poverty alleviation, AI is likely to significantly affect the very functioning of society, posing practical, ethical, legal and security challenges, many of which are not yet fully appreciated or understood. There are big decisions to be made as we capture the full potential of this technology.

Several factors have facilitated the advancement of AI. One is the creation of GPUs (graphics chips), which specialise in parallel processing: originally designed to handle the millions of pixel calculations needed every second to satisfy the intensely visual, parallel demands of video games, they now power AI workloads. Additionally, AI systems can now school themselves on the vast repositories of data across the Net (big data). Not least is the data that organisations are now capturing and recording about their own operations, whether from machines on a production floor or from service personnel recording the results of their site visits.

The development of deep learning algorithms has given AI the ability to extract useful information from vast stores of data more efficiently. However, deep learning alone is insufficient to generate complex logical thinking; it is nonetheless an essential component of all current AIs, including IBM’s Watson, Google’s search engine and Facebook’s algorithms.
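
To make the idea concrete, here is a minimal sketch of deep learning as pattern extraction: a small neural network (built with PyTorch; the network shape and the synthetic data are illustrative assumptions, not details of any of the systems named above) learns a non-linear rule directly from examples.

    # Minimal sketch of deep learning as pattern extraction.
    # Illustrative only: the data and network are assumptions.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic "data store": 1,000 samples, 8 features each, with a
    # label that depends non-linearly on the features.
    X = torch.randn(1000, 8)
    y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).long()

    # A small feed-forward network; stacked layers are what make it "deep".
    model = nn.Sequential(
        nn.Linear(8, 32), nn.ReLU(),
        nn.Linear(32, 32), nn.ReLU(),
        nn.Linear(32, 2),
    )
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(200):
        optimiser.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimiser.step()

    accuracy = (model(X).argmax(dim=1) == y).float().mean()
    print(f"training accuracy: {accuracy:.2f}")

Each layer transforms the data so the next layer can find patterns a single layer could not, which is the sense in which such systems extract information rather than follow hand-written rules.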

The convergence of parallel computation, bigger data and deeper algorithms has contributed to the success of AI. Combined with the IoT, AI will exert an unprecedented influence on our world today. For example, within factories, AI and IoT work together to manage various industrial processes. In an industrial plant, data generated by machines on the factory floor can be used to generate simple alerts, such as a warning to check a motor if a temperature sensor reading exceeds a pre-set level.

A more sophisticated IoT installation will have large numbers of sensors monitoring various operational aspects: temperature, vibration, current consumption and maybe more. This becomes a big data approach, in which all the data is sent to the cloud for a higher level of analysis, together with reporting of historical data and other trends. It also relates to Industry 4.0 architectures, which move factory machine control out of local, siloed systems and into remote, centralised cloud-based facilities. As this model evolves and grows, data processing and some other cloud functions are moved to the edge, as close to the data sources as possible.

This can usher in a true AI solution. Based on real-time analysis of a machine’s performance, and an aggregated history of performance across large numbers of machines on the factory floor, or across many factories, the AI system can learn the patterns that lead to failures. Such a predictive maintenance system can forecast that in, say, three months a part will fail if not serviced, and recommend actions to pre-empt the failure. The AI system may also recommend ways to operate the machines to maximise their useful life, offering trade-offs between performance and longevity. Machine learning algorithms make the analytics system smarter over time, as more data sets and patterns become available. The AI system is only as good as the data it receives: the more data on device operation, failure and maintenance you feed into it, the more accurate the predictive analytics system becomes.
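
As a rough illustration of the two levels described above, the pre-set threshold alert and the learned failure prediction, consider the following Python sketch. The sensor names, the threshold and the synthetic data are hypothetical assumptions for illustration, not details of any real installation.

    # Illustrative sketch: a fixed threshold alert versus a learned
    # predictive-maintenance model. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    TEMP_LIMIT_C = 80.0  # hypothetical pre-set alert level

    def simple_alert(temperature_c: float) -> bool:
        """Level 1: warn if a single reading exceeds a fixed limit."""
        return temperature_c > TEMP_LIMIT_C

    # Level 2: learn failure patterns from aggregated sensor history.
    rng = np.random.default_rng(0)
    # Each row: [temperature, vibration, current draw] for one machine-day.
    readings = rng.normal([70.0, 2.0, 10.0], [8.0, 0.5, 1.5], size=(5000, 3))
    # Synthetic labels: failures correlate with hot, vibrating machines.
    failed = ((readings[:, 0] > 78) & (readings[:, 1] > 2.3)).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(readings, failed)

    # Forecast for a new machine: risk that a part fails if not serviced.
    new_reading = np.array([[81.0, 2.6, 10.4]])
    print("simple alert:", simple_alert(new_reading[0, 0]))
    print("failure risk:", model.predict_proba(new_reading)[0, 1])

In a real deployment the model would be trained on genuine operation, failure and maintenance records, which is exactly why such a system improves as more data is fed into it.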

Conversely, without such AI capability, users simply would not be able to realise their IoT installation’s full potential. The large numbers of sensors in an IoT infrastructure generate data in volumes that would overwhelm human operators or technicians, who would need many hours to correlate and analyse data that AI systems can handle in or near real time, if they could do it at all. Not to be overlooked are the ways in which AI is influencing our lives in more direct, personal ways. We see it at work every day on shopping sites, supplying suggestions such as ‘Customers who viewed this also viewed…’.
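
One common way to generate such ‘also viewed’ suggestions is simple item-to-item co-occurrence counting. The toy sketch below works under that assumption; the product names and sessions are invented for illustration and do not describe any particular retailer’s system.

    # Toy "customers who viewed this also viewed" recommender based on
    # co-occurrence counts within viewing sessions (illustrative only).
    from collections import Counter, defaultdict
    from itertools import permutations

    # Hypothetical browsing sessions: product IDs viewed together.
    sessions = [
        ["kettle", "toaster", "mug"],
        ["kettle", "mug"],
        ["toaster", "bread-bin"],
        ["kettle", "toaster"],
    ]

    # Count how often each pair of items appears in the same session.
    co_viewed = defaultdict(Counter)
    for session in sessions:
        for a, b in permutations(set(session), 2):
            co_viewed[a][b] += 1

    def also_viewed(item: str, k: int = 3) -> list[str]:
        """Return up to k items most often viewed alongside `item`."""
        return [other for other, _ in co_viewed[item].most_common(k)]

    print(also_viewed("kettle"))  # e.g. ['toaster', 'mug']

Real recommenders layer weighting, recency and personalisation on top, but the core idea, learning patterns from large volumes of behavioural data, is the same one that runs through every AI application in this article.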

So where does AI go from here? Most experts accept three categories of AI development:

  • ANI: Artificial Narrow Intelligence: This is perhaps the most basic form of AI, specialising in one area only, for example, an AI that can beat the world chess champion but can do nothing else.
  • AGI: Artificial General Intelligence: sometimes referred to as Strong AI or Human-level AI. AGI reaches and then passes the intelligence level of a human, meaning it has the ability to “reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience” as easily as a human.
  • ASI: Artificial Super Intelligence: Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter — across the board.

Currently, the lowest level, ANI, exists in many implementations everywhere: in cars, factories, cities, shops and homes, and on our smartphones. AGI development projects are in progress, while ASI remains in the future. One example of an AGI project is DeepMind, a UK company acquired by Google. DeepMind is attempting to mimic the biological structure of the human brain in software, to build machines that can learn ‘organically’, that is, without human involvement. DeepMind’s research includes systems that the company claims have had a major environmental impact by learning how to use vastly less energy in Google’s data centres. The company is also collaborating with clinicians in the National Health Service to deliver better care for conditions that affect millions of people worldwide.

The true extent of AI’s potential to increase productivity is not yet defined, but we do know that, combined with IoT implementations, it offers opportunities for synergistic partnerships that deliver real benefits for companies and users, and the potential to drive significant change at every level of business and society. The pace at which this is delivered, though, if indeed it is, remains very much in human hands.
