Robotics, robots and technology: a simplified overview of a vast subject
In the past few years, robotics has gone through astonishingly rapid development thanks to the convergence of several factors, including affordable high-powered computing, compact and mobile components, big-data machine learning, and low-cost 3D manufacturing.
These technologies have led to a new wave of innovative robot designs. Where robots would previously have been cumbersome, ineffective or dangerous, they are increasingly utilized today in consumer and professional applications alike.
At the fundamental level, the development of current robotics (defined as the underlying technologies associated with robots) has less to do with the physical actuation and operation of devices than with computerized control and the development of machine autonomy. Both of these, in turn, are closely associated with increased machine perception via sensors, as well as with logical decision-making based on the pattern recognition that is a hallmark of machine learning.
The difference, then, between robots and many physical devices of similar form lies mainly in two aspects. The first is that, unlike comparable devices, robots can provide feedback to their human controllers, using sensors to create haptic (i.e., touch) or remote input. The second is that robots can process sensory data in order to decide how to execute tasks and, in more advanced cases, to define and elucidate the nature of the task to be performed.
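The second aspect, processing sensory data in order to decide how to act, can be sketched as a simple sense-decide-act loop. The sensor reading, thresholds, and action names below are hypothetical illustrations rather than any particular robot's interface:

```python
# A minimal sketch of a sense-decide-act step, assuming a single
# distance sensor and three possible motion commands (hypothetical).

def decide(distance_cm: float) -> str:
    """The 'decision' step: map a sensor reading to an action."""
    if distance_cm < 10:
        return "stop"      # obstacle very close
    elif distance_cm < 30:
        return "slow"      # obstacle nearby, proceed carefully
    return "forward"       # path clear

# One pass of the loop: sense (the reading), decide, then act.
print(decide(5.0))
print(decide(20.0))
print(decide(50.0))
```

A real robot would run this loop continuously, and a more advanced one would replace the hand-written thresholds with a learnt decision policy.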
Intelligence in robotics is conceptually complex. Even defining intelligence is a matter of much debate, encompassing other abstract concepts such as understanding, learning, reasoning, and meaning. A quick review of most dictionaries easily turns up circular definitions: understanding is defined as perceiving something, while perception is defined as the ability to understand something.
The difficulty in applying these abstract definitions to machine code is that we are fully aware of the processes that produce an outcome which would otherwise look like reasoning or understanding. It therefore becomes very easy to understate or dismiss machine intelligence as distinct and distant from organic intelligence, simply because we already understand the details of the process: if a human coded the behavior, the reasoning goes, it cannot be truly intelligent.
This perception is being challenged by self-learning structures, such as the neural networks used for machine learning. In these cases the intelligence is not programmed but learnt, and the learning is then applied to other similar tasks, inducing further learning. This process of generating intelligence is analogous to the way organic intelligence develops through trial and error followed by repetition, ultimately leading to innovative decisions, that is, decisions that are not predetermined.
Machine learning and deep learning
Perhaps the crucial change in recent machine intelligence is the development of machine learning, and more specifically deep learning.
Machine learning uses large volumes of data to recognise patterns based on similar experiences. This method often makes use of neural networks, where probabilistic outcomes are combined to then make assertions about what a particular piece of information represents. The power of machine learning is that it can be applied to any source data, which then opens up the possibility for the rapid acceleration of machine intelligence. The current stage for most artificial intelligence is pattern recognition from large volumes of text or visual data, such as the facial-recognition algorithms on Facebook, the contextual recommendations of Google Assistant, or the natural language processing of speech by Amazon's Alexa.
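The idea of a neural network combining probabilistic outcomes can be illustrated with a single artificial neuron: it combines weighted evidence from input features into a probability that a pattern matches. The weights below are hand-picked purely for illustration; in a real system they would be learnt from large volumes of data:

```python
import math

def sigmoid(z: float) -> float:
    """Squash a weighted sum into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(features, weights, bias):
    """One artificial neuron: weighted evidence -> probabilistic assertion."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return sigmoid(z)

# Hypothetical "does this pattern match?" check over two crude features,
# with illustrative (not trained) weights.
p = neuron([0.9, 0.8], weights=[2.0, 2.0], bias=-2.0)
print(round(p, 3))
```

Real networks stack many such units, and training adjusts the weights so that the probabilistic outputs agree with labelled examples.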
Recent machine learning systems use a process called deep learning, in which algorithms build high-level abstractions in data by processing multiple layers of information. In this respect, deep learning attempts to emulate the layered workings of the human brain.
Progress in deep learning today has been possible thanks to advanced algorithms, alongside the development of new and much faster hardware based on multiple graphics processing unit (GPU) cores instead of traditional central processing units (CPUs). These new architectures allow faster learning phases as well as more accurate results.
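The "multiple layers of information" idea can be sketched as a forward pass through a tiny two-layer network: each layer transforms the previous layer's output, so later layers can represent higher-level abstractions of the raw input. The weights are arbitrary illustrative values, not a trained model:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid non-linearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(i * w for i, w in zip(inputs, row)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(x):
    # First layer: combines raw inputs into low-level features.
    h1 = layer(x, weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, 0.0])
    # Second layer: combines those features into a higher-level output.
    h2 = layer(h1, weights=[[1.0, 1.0]], biases=[-1.0])
    return h2[0]

print(forward([1.0, 0.0]))
```

GPUs accelerate exactly this kind of computation: the weighted sums inside each layer are matrix multiplications that parallelise well across many cores, which is why the shift from CPUs to GPUs sped up both training and inference.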
Artificial intelligence is a broad and poorly defined ideal: the imaginary and moving boundary at which machine intelligence exceeds our current expectations of its capabilities.
At the edge of currently accepted AI definitions are recent achievements in the field, such as understanding human speech, winning at highly complex games like the ancient game of Go, recognising faces and objects, and analysing and spotting patterns in huge volumes of data.
Yet each of these is premised on a method that is still relatively explainable. These achievements, while remarkable, also lack the higher order of intelligence demonstrated by abstract, nonmaterial qualities like creativity, understanding, or self-awareness. In all likelihood, it is only a matter of time before these triumphs are considered mere computer know-how and not really AI.
At that point, our expectations will shift even further toward abstractions that we currently find harder to define.
Tom Morrod is Research and Analysis Executive Director within the IHS Technology Group at IHS Markit
Posted 18 September 2017