<iframe width="560" height="315" src="https://www.uttv.ee/embed?id=31738" frameborder="0" allowfullscreen></iframe>
Over the past fifteen years, big data technologies have played a crucial role in addressing challenging characteristics of data such as volume, variety, and velocity. These technologies were designed to run on computing clusters of (commodity) hardware. Yet such clusters were centrally managed, assumed virtually unlimited resources, and required the data to be moved to the cluster for processing. This computing paradigm shift was further supported by the growing adoption of the cloud computing model. Since around 2015, with the spread of Internet of Things applications such as Smart-X and remote healthcare, increasing data velocity and stringent processing-latency requirements have pushed big data technologies beyond the principles these systems were designed for. In response, the terms fog computing and edge computing have emerged to extend the notion of cloud computing: in essence, data must be processed closer to where it is generated, avoiding long round trips to the cloud. Moreover, this is driven not only by business requirements but also by regulatory ones, e.g. the GDPR. Processing data on the fog/edge has to respect the limited computing resources of these devices, data privacy, and the unstable nature of such computing networks. In his lecture, Professor Awad will discuss the challenges of conducting big data analytics across the hierarchy of the computing network (cloud/fog/edge). The lecture will cover the new challenges brought by extending the scope of processing and present a vision for architectures to be realised over the next decade.