Intelligence can be designed into a system using traditional AI methods such as expert systems, fuzzy logic or neural networks, but the most cost-effective and powerful implementation is through the use of distributed AI, where a community of intelligent agents decides on the optimal or near-optimal action through a process of negotiation.
Examples of such systems include intelligent machine tools, intelligent robots, intelligent geometry compressors, autonomous road vehicles, self-parking cars, pilotless aircraft and goal-seeking missiles. In this paper, autonomous mechatronic systems will also be referred to as Autonomous Mechatronics Systems or simply Intelligent Machines.
A most interesting variety of intelligent systems is a network of mutually interconnected intelligent systems, or an Intelligent Mechatronic Network.
Intelligent mechatronic networks are capable of deciding on their own behavior by means of negotiation between their constituent autonomous units (the network nodes). Each constituent unit is itself an intelligent mechatronic system. Even more impressive is their ability to improve their own performance by self-organization (changing the relations between constituent components with a view to improving overall network performance).
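The paper does not prescribe a particular negotiation mechanism; as a minimal illustrative sketch, one might assume a simple contract-net style protocol in which one node announces a task, the other nodes bid with their estimated cost, and the lowest bidder is awarded the task. All names and the cost metric below are hypothetical.

```python
# Hypothetical sketch of negotiation between autonomous units (contract-net
# style): a task is announced, every node bids its estimated cost, and the
# cheapest bidder wins. The cost metric (load + effort) is an assumption.

class Node:
    """An autonomous unit that can estimate its cost for a task."""
    def __init__(self, name, load):
        self.name = name
        self.load = load  # current workload, used as a crude cost estimate

    def bid(self, task):
        # Bid = current load plus the task's nominal effort.
        return self.load + task["effort"]

def negotiate(nodes, task):
    """Announce the task, collect bids, and award it to the cheapest node."""
    bids = {node.name: node.bid(task) for node in nodes}
    winner = min(bids, key=bids.get)
    return winner, bids

nodes = [Node("robot-A", load=3), Node("robot-B", load=1), Node("robot-C", load=5)]
winner, bids = negotiate(nodes, {"name": "drill hole", "effort": 2})
print(winner)  # robot-B: the least-loaded unit wins the contract
```

In a real network each bid would of course reflect the unit's sensors, capabilities and commitments rather than a single scalar, but the decision still emerges from the exchange of bids rather than from a central controller.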
The most advanced intelligent networks undergo continuous evolution (disconnecting, and thus eliminating, less useful constituent units, and connecting new units perceived by the network to be beneficial for achieving current or future goals).
Fleets of spacecraft, colonies of intelligent agricultural machinery, intelligent manufacturing systems and swarms of intelligent parcels are examples of such networks. Self-organizing and evolving networks will almost certainly dominate the next decade as the most sought-after engineering systems.
2.2. Intelligence
There is no agreed definition of Intelligence. It is considered to be too complex a concept for a neat and precise definition. My view is that if we call a class of systems ‘‘intelligent’’, we should define in what way these systems differ from the rest.
I suggest that the following definition of intelligence is quite adequate for our purpose:
Intelligence is the capability of a system to achieve its goals under conditions of uncertainty.
Here, the uncertainty is caused by the occurrence of unpredictable internal events, such as component failures, and/or external events, e.g., unforeseeable changes in the system's environment.
To exhibit intelligent behavior, a system must have access to knowledge of the domain in which it operates, and must act upon this knowledge in response to, or in anticipation of, external inputs (rather than passively reacting to input data in a preprogrammed manner). In most cases, to ‘‘act upon knowledge’’ means selecting a pattern of behavior that takes advantage of unpredictable events or neutralises their undesirable consequences. It is important to note that when an intelligent system meets a new problem it must find a solution by trial and error, just like human beings.
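The trial-and-error behavior described above can be sketched in a few lines. This is my own toy illustration, not a method from the paper: the system repeatedly samples candidate actions against an environment whose response it cannot predict, and keeps the first action that achieves the goal. The action names and success test are invented for the example.

```python
# Toy illustration of trial-and-error problem solving under uncertainty:
# the system does not know in advance which action works, so it samples
# actions and stops when the goal is achieved.

import random

def try_actions(actions, succeeds, max_trials=100, seed=0):
    """Trial-and-error search: sample actions until one achieves the goal."""
    rng = random.Random(seed)
    history = []
    for _ in range(max_trials):
        action = rng.choice(actions)
        history.append(action)
        if succeeds(action):
            return action, history
    return None, history  # goal not achieved within the trial budget

# Hypothetical environment: only one of three actions achieves the goal.
solution, history = try_actions(["push", "pull", "rotate"],
                                succeeds=lambda a: a == "rotate")
```

A more capable system would not sample blindly; it would use its domain knowledge to order or prune the candidate actions, but the underlying loop of attempt, observe, and revise is the same.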
2.3. Distributed intelligence
The term Distributed Intelligence implies that the system has many interconnected decision-making units that share the responsibility for system behavior. Each unit may access centrally stored knowledge and/or its own local knowledge, the latter arrangement usually improving overall system performance. A distributed intelligent system is typically a network with decision-making units as nodes and communication channels as links. The key feature of a distributed intelligent system is its Emergent Intelligence, that is, intelligence created through the interaction of the stakeholder units. Relatively simple units, when connected into a complex network, are capable of generating surprisingly sophisticated intelligent behavior. Such systems are often compared to colonies of ants or to swarms of bees.