Distributed and Decentralised Edge Computing @ UCL
The Internet world is turning upside down. In today's Internet, data primarily flows from data-centre servers and server farms, located largely at the core of the network, towards the users at the edge of the network.
In tomorrow's Internet, data will primarily be produced at the edge of the network by IoT devices, smart/autonomous vehicles, wearables, sensors and the like. This data will be of enormous volume: each autonomous vehicle has been estimated to generate tens of TBs of data per hour.
The current Internet infrastructure is not prepared to accommodate this volume of data coming in from the edge. The current model of sending everything back to the cloud for processing will simply not cope with this wave of data coming from the edge.
In order to get the Internet infrastructure prepared for this change, there are a number of components that need to be smoothly integrated into the current Internet architecture. At UCL, we are building solutions to address the needs of a future, privacy-preserving, IoT-dominated edge computing environment.
Below we list those necessary components, as well as the work we have done at UCL over the past few years together with our great colleagues and collaborators. The majority of the solutions and opinions discussed below are summarised in the following 2-page position paper.
ACM IoT Open Day@MobiSys 2018, Munich, Germany, June 2018.
Store-Process-Send at Edge Data Repositories
As a starting point, we have argued that the data communication pattern for edge-computing environments is a "store-process-send" pattern. This is very different to how the Internet works today. The proposed pattern builds on the assumption that only a small proportion of the data produced at the edge is actually useful. Out of the tens of TBs per hour produced by an autonomous car, quite likely only a tenth is actually useful information for the car manufacturer, the local council, the (future) car mechanic, or the users themselves.
According to the proposed “store-process-send” communication pattern, IoT-produced data is temporarily stored in edge access points; functions then move to the access point (instead of moving data to the function); data gets processed; and, finally, the produced result is sent to its final destination.
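The three steps above can be sketched as follows. This is a minimal illustration of the pattern, not an implementation from any UCL system; the `EdgeRepository` class, the data names and the word "manufacturer" destination are all hypothetical.

```python
class EdgeRepository:
    """Toy edge access point following the "store-process-send" pattern."""

    def __init__(self):
        self.store = {}  # data_id -> raw IoT-produced data

    def ingest(self, data_id, raw):
        # Step 1 (store): keep incoming data locally instead of forwarding it.
        self.store[data_id] = raw

    def process(self, data_id, fn):
        # Step 2 (process): the function moves to the data, runs, then the
        # stored data is discarded -- only the result survives.
        return fn(self.store.pop(data_id))


def send(result, destination):
    # Step 3 (send): only the (much smaller) result leaves the edge.
    return (destination, result)


repo = EdgeRepository()
repo.ingest("cam-42", [3, 7, 2, 9])      # raw sensor readings
summary = repo.process("cam-42", max)    # e.g., keep only the peak value
print(send(summary, "manufacturer"))     # → ('manufacturer', 9)
```

The point of the sketch is the ordering: data never traverses the core network in raw form; the function travels instead, and only the processed result is transmitted.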
Mobile Edge Data Repositories
We have recently proposed "Mobile Edge Data Repositories", which operate under the "store, process and send" principle: edge storage and processing devices that temporarily store incoming data.
Think of this storage as a service that the (wireless) ISP provides its users, in addition to calls, texts and download/upload data: users have a virtual storage allowance at any access point they connect to.
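A minimal sketch of such an allowance, tracked per user and enforced at whichever access point the user attaches to. The class name, quota size and byte-counting API are assumptions for illustration, not part of the proposed architecture.

```python
class StorageAllowance:
    """Per-user virtual storage allowance, analogous to a data plan."""

    def __init__(self, quota_bytes):
        self.quota = quota_bytes
        self.used = 0

    def reserve(self, nbytes):
        # Reserve space at the currently attached access point;
        # refuse if the user's plan-wide quota would be exceeded.
        if self.used + nbytes > self.quota:
            return False
        self.used += nbytes
        return True

    def release(self, nbytes):
        # Free space once data has been processed and the result sent on.
        self.used = max(0, self.used - nbytes)


allowance = StorageAllowance(quota_bytes=10_000_000)  # e.g., a 10 MB plan
allowance.reserve(6_000_000)        # first upload fits
allowance.reserve(5_000_000)        # refused: would exceed the quota
allowance.release(6_000_000)        # processed and sent, space reclaimed
```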
In the following paper we lay out an initial version of an architecture to achieve this goal, based primarily on Information-Centric Networking.
“Mobile Data Repositories at the Edge”, HotEdge Workshop @ USENIX ATC, Boston, USA, July 2018.
Computation-Centric Network Architectures
Given the huge amounts of data produced at the edge, it is now largely accepted that it is cheaper to bring computation to the data than data to the computation. There will therefore soon be a need for a computation-centric network architecture: the current model of IP address-based host resolution cannot address "moving functions" that target data stored in edge data repositories, remain active for as long as the computation lasts, and then dissolve.
We have therefore proposed Named Function as a Service (NFaaS), Named Function Mobility (NFM) and Remote Method Invocation (RICE) to address architectural issues for distributed edge computing.
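The name-based resolution step that these architectures rely on can be sketched as follows: a request carries a function name rather than a host address, and the network picks the nearest node currently able to execute it. The routing table, function names and hop counts below are hypothetical, not taken from NFaaS.

```python
# function name -> [(node advertising the function, hop distance), ...]
routing_table = {
    "/detect/pedestrians": [("ap-7", 1), ("cloudlet-3", 3), ("cloud", 12)],
}


def resolve(function_name):
    """Resolve a *named function* to the closest node that can run it,
    instead of resolving a host address as IP does."""
    replicas = routing_table.get(function_name, [])
    if not replicas:
        return None
    return min(replicas, key=lambda r: r[1])[0]


print(resolve("/detect/pedestrians"))  # → ap-7
```

The contrast with today's model is that the binding between name and execution point is late and local: the same request issued one hop away may legitimately resolve to a different node.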
ACM ICN 2017, Berlin, Germany, September 2017.
Under submission, September 2018.
“RICE: Remote Method Invocation in ICN”, ACM ICN 2018, Boston, USA, September 2018.
**Best Paper Award**
“Open Security Issues for Edge Named Function Environments” IEEE Communications Magazine, 2018.
Resource Allocation in Edge-Cloudlets
A distributed network of computation spots (or cloudlets) cannot guarantee the elasticity of hugely over-provisioned data centres. In other words, today's assumption of endless resources within data centres does not hold in an edge computing environment. When resources are scarce, resource allocation becomes a necessary component of the edge-computing system.
Assuming that computation spots are spread along ISP paths, the problem of which computation to host where resembles the caching problems investigated over the past decades. We have therefore experimented with existing caching and cache-replacement algorithms (e.g., LRU, LFU and the like) to see whether they perform efficiently for dynamically instantiated named functions, rather than static content. The results are noteworthy: well-known replacement algorithms (as well as combinations of them) perform close to optimal.
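To make the analogy concrete, here is LRU replacement applied to function instances at a single cloudlet, in the spirit of the experiment described above. The capacity and function names are illustrative; the actual study evaluates several algorithms and combinations over realistic workloads.

```python
from collections import OrderedDict


class FunctionCache:
    """A cloudlet holding at most `capacity` instantiated named functions,
    evicting the least recently used one on overflow (LRU)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # function name -> instantiated state

    def request(self, name):
        """Return True on a hit; on a miss, instantiate and maybe evict."""
        if name in self.cache:
            self.cache.move_to_end(name)     # mark as recently used
            return True
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        self.cache[name] = "instantiated"
        return False


cloudlet = FunctionCache(capacity=2)
hits = [cloudlet.request(f) for f in ["f1", "f2", "f1", "f3", "f2"]]
print(hits)  # → [False, False, True, False, False]
```

The only conceptual change from content caching is what eviction costs: evicting a running function means tearing down state and re-instantiating it elsewhere, not merely re-fetching a static object.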
“On Uncoordinated Service Placement in Edge Clouds” IEEE CloudCom 2017, Hong Kong, Dec 2017.
Market-Based Compute Ecosystem
Some entity then needs to deploy edge computing spots or cloudlets, and needs to monetise this infrastructure. We have built several auction-based models for resource allocation, based on demand-and-supply rules but also taking user mobility into account. The resulting framework is the foundation of a "Market-Based Compute Ecosystem" run by "In-Network Computing Providers".
The framework includes all the components necessary to allocate resources efficiently and on demand, and to accommodate mobile users that connect to several Access Points and Base Stations as they move.
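As a flavour of how auction-based allocation works, here is a toy second-price (Vickrey) auction for a single cloudlet slot: the highest bidder wins but pays the second-highest bid, which makes truthful bidding the best strategy. This is a textbook mechanism used only to illustrate the idea; the bidders and prices are made up, and the actual Edge-MAP and FogSpot mechanisms differ.

```python
def second_price_auction(bids):
    """bids: {bidder: offered price}. Returns (winner, price paid),
    where the winner pays the second-highest bid (or their own bid
    if they are the only bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price


winner, price = second_price_auction({"app-A": 5.0, "app-B": 3.5, "app-C": 2.0})
print(winner, price)  # → app-A 3.5
```

Mobility is what complicates the real setting: a user's application may have to re-bid at every Access Point along their path, which is why the framework reasons about sequences of auctions rather than a single one.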
"FogSpot: Spot Pricing for Application Provisioning in Edge/Fog Computing", IEEE Transactions on Services Computing (IEEE TSC), January 2019.
“Edge-MAP: Auction Markets for Edge Resource Provisioning”, IEEE WoWMoM 2018, Crete, Greece, June 2018.
“On-path Cloudlet Pricing for Low Latency Application Provisioning” IEEE LANMAN 2018, Washington DC, USA, June 2018 - (Invited Paper)
The Need for Decentralisation
Last but not least, a widely distributed computing infrastructure is difficult, if not impossible, to manage by a single entity. It is highly unlikely that the edge infrastructure of computation spots will be deployed, run and managed by a single entity/company. Instead, it is more reasonable (and desirable) to assume that a multitude of entities, such as local councils and authorities together with ISPs, IoT companies and others, will invest to deploy and operate this edge infrastructure.
There are a few reasons for this, the most important being the so-called "War over Data": IoT companies, the automotive industry and any user of the future edge computing infrastructure will (rightly so) do their best to keep user data to themselves. A growing number of privacy-preserving technologies, communities and initiatives are building platforms that enable users to keep their personal data to themselves and decide what to share, with whom, and at what price.
Privacy-preserving distributed edge computing necessitates Trusted Execution Environments, such as Intel SGX, as well as improved versions of it. Such technologies, together with blockchain/DLT and smart-contract platforms, can facilitate processing of data in a secure and privacy-preserving manner, without third parties being able to snoop on personal data or alter transactions that have already taken place between any two parties.
To allow processing of private data on trustless executing nodes and to facilitate a fair and secure customer-provider relationship, we have built Airtnt: a fair payment system for outsourced computation. The protocol is carefully designed so that no involved party can cheat, cause others to lose their stake, or require a third party to verify computations and payments.
We believe Airtnt is a necessary component to provide support for a fundamental feature of the future Internet, that is: decentralisation.
“Airtnt: Fair Exchange Payment for Outsourced Secure Enclave Computations”, Under Submission, May 2018.
“Proof-of-Prestige: A Useful Work Reward System for Unverifiable Tasks”, 1st IEEE International Conference on Blockchains and Cryptocurrencies (IEEE ICBC), May 2019.
NDSS'17 DISS Workshop, San Diego, USA, February 2017.
The security measures of today's Internet (SSL, TLS, encryption of any sort) did not prevent Cambridge Analytica from misusing users' private data and steering users' opinions towards a favoured outcome. Facebook and similar (purely application-layer) platforms are intrinsically unable to deal with the situation, and therefore never will.
Identity verification and access control to private user data can be automatically granted (or denied) when specific rules, programmed in smart contracts (and recorded on the blockchain), are met. The community needs not only to re-define user privacy, but also to develop new software structures that grant access to sensitive personal data when the rules programmed in a smart contract are met. Most importantly, these software structures need to support revocation of access rights if conditions change over time (e.g., if personal data are used in undesired ways).
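The grant/check/revoke lifecycle described above can be sketched as follows. This approximates what a smart contract recording grants on a blockchain would enforce; the `AccessGrant` class and its rule fields (subject, purpose, expiry) are assumptions for illustration, not a real contract interface.

```python
class AccessGrant:
    """A rule-based grant over personal data, with revocation support."""

    def __init__(self, subject, purpose, expires_at):
        self.subject = subject          # who may access the data
        self.purpose = purpose          # what the data may be used for
        self.expires_at = expires_at    # grant lifetime (abstract clock)
        self.revoked = False

    def revoke(self):
        # Access rights can be withdrawn if conditions change over time,
        # e.g., the data is found to be used in undesired ways.
        self.revoked = True

    def permits(self, subject, purpose, now):
        # Access is granted only while every programmed rule is met.
        return (not self.revoked
                and subject == self.subject
                and purpose == self.purpose
                and now < self.expires_at)


grant = AccessGrant("insurer-X", "premium-calculation", expires_at=1000)
grant.permits("insurer-X", "premium-calculation", now=500)   # allowed
grant.permits("insurer-X", "advertising", now=500)           # denied: wrong purpose
grant.revoke()
grant.permits("insurer-X", "premium-calculation", now=500)   # denied: revoked
```

On a real smart-contract platform the same checks would run on-chain, so neither party could unilaterally rewrite or backdate a grant.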
ACM MobiArch 2016, New York, USA, October 2016 - Best Paper Award
Elsevier Computer Communications, In Press, Sept 2018 - (Invited Paper)
Get in touch
Dr Ioannis Psaras
Email: i.psaras (at) ucl.ac.uk
Team & Contributors
Dr Onur Ascigil
Dr Michal Krol
Dr Sergi Rene
Dr Argyrios Tasiopoulos
Mr Mustafa Al-Bassam
Mr Alberto Sonino
Prof. George Pavlou
Dr Dirk Kutscher, Huawei, Germany
Dave Oran, Network Systems Research & Design, USA
Prof. Lixia Zhang, UCLA, USA
Dr Mayutan Arumaithurai, University of Goettingen, Germany
Prof. Vasilis Tsaousidis and the team at DUTH
... many more ...
Our excellent colleagues in the EU FP7/JP NICT GreenICN Project, H2020 UMOBILE Project, EU H2020/JP NICT ICN2020 Project