Edge Computing and Machine Learning, a strategic alliance
By Alfonso Ibáñez and Aitor Landete
Nowadays, it is not unusual to talk about terms such as Artificial Intelligence or Machine Learning. Society, companies and governments are increasingly aware of techniques such as deep learning, semi-supervised learning, reinforcement learning or transfer learning, among others. However, they have not yet assimilated the many benefits of combining these techniques with other emerging technologies such as the Internet of Things (IoT), quantum computing or blockchain.
IoT and Machine Learning are two of the most exciting disciplines in technology today, and they are having a profound impact on both businesses and individuals. Millions of small devices are already embedded in factories, cities, vehicles, phones and homes, collecting the information needed to make smart decisions in areas such as industrial process optimisation, predictive maintenance in offices, people's mobility, energy management at home and facial recognition, among others.
Most of these applications sense data from the environment and transmit it over the Internet to powerful remote servers, where the intelligence and decision-making reside. However, applications such as autonomous vehicles are safety-critical and require accurate real-time responses. These performance requirements rule out relying on servers outside the vehicle, mainly because of the time it takes to transfer data to external servers and the permanent need for an internet connection to process the information.
Edge Computing
A new computing paradigm is emerging to help alleviate some of the problems described above. This approach brings data processing and storage closer to the devices that generate the data, eliminating reliance on servers in the cloud or in data centres located thousands of miles away. Edge Computing is transforming the way data is processed, improving response times and solving the connectivity, scalability and security problems inherent to remote servers.
The proliferation of IoT devices, the rise of Edge Computing and the advantages of cloud services are enabling the emergence of hybrid computing, where the strengths of Edge and Cloud are maximised. This hybrid approach allows tasks to be performed in the optimal place to achieve the objective, whether on local devices, on cloud servers, or both. Depending on where the execution takes place, the hybrid architecture coordinates tasks between edge devices, edge servers and cloud servers (a simplified routing sketch follows the list below):
- Edge devices: these are devices that generate data at the edge of the network and have connectivity (Bluetooth, LTE IoT, etc.). They are equipped with small processors to store and process information and even execute, in real time, certain analytical tasks, which can result in immediate actions by the device. Tasks requiring greater complexity are moved to more powerful servers at higher levels of the architecture. Some examples of edge devices are ATMs, smart cameras, smartphones, etc.
- Edge servers: these are servers that have the capacity to process some of the complex tasks sent from the lower devices in the architecture. These servers are in continuous communication with the edge devices and can function as a gateway to the cloud servers. Some examples are rack processors located in industrial operations rooms, offices, banks, etc.
- Cloud servers: these are servers with large storage and computing capacity that handle the tasks that could not be completed at the lower tiers. They also manage all of the system's devices and host numerous business applications, among many other services.
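To make this division of labour concrete, the sketch below (in Python) routes a task to the edge device, an edge server or the cloud depending on its latency requirement and computational cost. The task attributes, thresholds and function names are illustrative assumptions, not part of any particular framework.

```python
# Illustrative sketch of hybrid edge/cloud task routing.
# The thresholds and task attributes are hypothetical; real deployments
# rely on orchestration frameworks rather than hand-written rules.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_cost: float    # arbitrary units of required compute
    max_latency_ms: float  # tightest acceptable response time

def route(task: Task) -> str:
    """Pick the tier where the task should run."""
    if task.max_latency_ms < 50 and task.compute_cost < 1.0:
        return "edge device"   # real-time and lightweight: run locally
    if task.compute_cost < 10.0:
        return "edge server"   # heavier, but still latency-sensitive
    return "cloud server"      # large-scale analytics, model training, etc.

if __name__ == "__main__":
    for t in [Task("anomaly alert", 0.2, 20),
              Task("video analytics", 5.0, 200),
              Task("model retraining", 100.0, 60_000)]:
        print(f"{t.name:>16} -> {route(t)}")
```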
Edge Artificial Intelligence
Research in the field of Machine Learning has made it possible to develop novel algorithms in the context of the Internet of Things. Although the execution of these algorithms has traditionally been associated with powerful cloud servers because of their computational requirements, the future of this discipline is linked to the use of analytical models within edge devices. These new algorithms must be able to run on devices with low-power processors and limited memory, and without the need for an Internet connection.
Bonsai and ProtoNN are two examples of new algorithms designed to run analytical models on edge devices. Both follow the supervised learning philosophy and can solve problems in real time on very simple devices with few computing resources. One application of this type of algorithm is smart speakers. These devices integrate a trained model that analyses all the words detected and identifies which of them is the activation keyword ("Alexa", "Hey Siri", "OK Google", ...). Once the keyword is recognised, the system starts transmitting the audio to a remote server, which determines the required action and proceeds to execute it.
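This wake-word pattern can be sketched roughly as follows. The scoring function, the 0.9 threshold and the synthetic audio frames are placeholders chosen to keep the example self-contained; a real smart speaker would run a compact neural keyword spotter instead.

```python
# Rough sketch of an on-device wake-word gate: everything runs locally
# until the keyword is detected, and only then is audio sent to the cloud.
# The tiny "model" below is a placeholder, not a real keyword spotter.

import numpy as np

KEYWORD_THRESHOLD = 0.9        # assumed confidence required to wake up

def keyword_probability(audio_frame: np.ndarray) -> float:
    """Placeholder scorer: a real device runs a compact neural net here."""
    energy = float(np.mean(audio_frame ** 2))
    return min(energy, 1.0)    # pretend that loud frames contain the keyword

def listen(frames):
    """Process frames locally; yield only the ones that should go to the cloud."""
    awake = False
    for frame in frames:
        if not awake and keyword_probability(frame) >= KEYWORD_THRESHOLD:
            awake = True       # wake word heard: start streaming to the server
        elif awake:
            yield frame        # this audio is sent for remote processing

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = [rng.normal(0, s, 1600) for s in (0.1, 0.1, 1.5, 0.5, 0.5)]
    print(f"frames streamed to the cloud: {len(list(listen(stream)))}")
```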
Unlike the previous algorithms, in which model training is performed on cloud servers, the Federated Learning approach arises to orchestrate the training of analytical models between the edge and the cloud. In this approach, each of the system's edge devices is in charge of training an analytical model with the data it has stored locally. After this training phase, each device sends the parameters of its local model to a central cloud server, where all the models are combined into a single master model. As new information is collected, the devices download the latest version of the master model, retrain it with the new information and send the resulting model back to the central server. The information collected at the edge devices therefore never has to be transferred to the cloud server; only the generated models are exchanged.
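A minimal sketch of this idea, often referred to as federated averaging, might look like the code below. The linear model, the learning rate and the synthetic per-device datasets are assumptions used purely to illustrate how local updates are combined without moving raw data.

```python
# Minimal federated-averaging sketch: each device fits a model on its own
# data and only the model weights travel to the server, never the raw data.

import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=20):
    """A few steps of gradient descent on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """One round: broadcast weights, train locally, average the results."""
    local_models = [local_train(global_w, X, y) for X, y in devices]
    return np.mean(local_models, axis=0)        # simple (unweighted) average

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three devices, each holding its own private dataset.
    devices = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))
    master_w = np.zeros(2)
    for _ in range(5):
        master_w = federated_round(master_w, devices)
    print("master model weights:", np.round(master_w, 2))  # close to [2, -1]
```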
The new algorithms proposed in the literature try to optimise three important metrics: latency, throughput and accuracy. Latency refers to the time needed to infer a single data record, throughput is the number of inferences made per second and accuracy is the proportion of predictions that turn out to be correct. In addition, the power consumption required by the device is another aspect to consider. In this context, Apple recently acquired Xnor.ai, a start-up that aims to drive the development of new efficient algorithms that conserve the most precious part of the device: its battery.
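These three metrics are straightforward to measure around any inference function. The snippet below shows one possible measurement harness; the trivial model and the synthetic test set are placeholders.

```python
# Sketch of a harness for the three metrics discussed above.
# The "model" is a trivial placeholder; only the measurement logic matters.

import time
import numpy as np

def predict(x: np.ndarray) -> int:
    """Placeholder model: classify by the sign of the feature sum."""
    return int(x.sum() > 0)

def evaluate(X: np.ndarray, y: np.ndarray) -> dict:
    start = time.perf_counter()
    preds = np.array([predict(row) for row in X])
    elapsed = time.perf_counter() - start
    return {
        "latency_ms": 1000 * elapsed / len(X),  # average time per inference
        "throughput_per_s": len(X) / elapsed,   # inferences per second
        "accuracy": float((preds == y).mean()), # fraction of correct answers
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))
    y = (X.sum(axis=1) > 0).astype(int)
    print(evaluate(X, y))
```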
Edge adoption
Businesses are embracing new technologies to drive digital transformation and improve their performance. Although there is no reference guide for the integration of these technologies, many companies follow the same pattern for the implementation of edge-related projects:
- The first phase consists of the development of the most basic scenario. The sensors of the edge devices collect the information from the environment and send it to the cloud servers where it is analysed and the main alerts and metrics are reported through dashboards.
- The second phase extends this functionality by adding a processing layer to the edge device. Before the information is sent to the cloud, the device performs a small analysis of it and, based on the detected values, can initiate various actions through Edge Computing.
- The most mature phase is the incorporation of edge analytics. In this case, the edge devices process the information and run the analytical models they have integrated to generate intelligent responses in real time. These results are also sent to cloud servers to be processed by other applications.
A more novel approach associated with edge analytics applications consists of enriching the predictions generated by edge devices with additional predictions provided by cloud servers. The challenge for the scientific community now is to develop systems that dynamically decide when to invoke this additional intelligence from the cloud and how to optimise the predictions made with both approaches.
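One simple way to realise this dynamic escalation is a confidence-based cascade: the edge model answers on its own when it is confident and defers to the larger cloud model otherwise. The sketch below is illustrative only; the two toy models and the 0.8 threshold are assumptions, not a reference design.

```python
# Confidence-based cascade between an edge model and a cloud model.
# Both "models" and the 0.8 threshold are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.8   # below this, the edge defers to the cloud

def edge_predict(x: float):
    """Cheap local model: returns a label and its confidence."""
    confidence = min(abs(x), 1.0)  # pretend distance from zero means confidence
    return int(x > 0), confidence

def cloud_predict(x: float) -> int:
    """Stand-in for a slower but more accurate remote model."""
    return int(x > 0)

def hybrid_predict(x: float):
    label, confidence = edge_predict(x)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "edge"          # answer locally, no network round trip
    return cloud_predict(x), "cloud"  # escalate the hard case to the cloud

if __name__ == "__main__":
    for x in (1.3, 0.05, -0.9, -0.2):
        print(x, "->", hybrid_predict(x))
```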