Session Keynote-1

Conference time: 9:10 AM — 9:55 AM HKT
Local time: Dec 1 (Tue), 5:10 PM — 5:55 PM PST

Resource Allocation and Consensus in Edge Blockchain Systems

Yuanyuan Yang (SUNY Distinguished Professor, Stony Brook University, USA)

Edge devices with sensing, storage, and communication resources are penetrating our daily lives. Such devices can be used to conduct data transactions, e.g., micro-payments and micro-access control, and blockchain technology can ensure that these transactions are unmodifiable and undeniable. In this talk, we present a blockchain system adapted to edge devices. The new system fairly and efficiently allocates storage resources on edge devices and achieves high scalability. We find the optimal peer nodes for storing transaction data in the blockchain, and provide a Recent Block Storage Allocation Scheme for quick retrieval of missing blocks. The system also reaches mining consensus with low energy consumption on edge devices through a new Proof of Stake mechanism. Extensive simulations show that the blockchain system works efficiently in edge environments: on average, the new system uses 15% less time and consumes 64% less battery power compared with traditional blockchain systems.
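
The abstract does not detail the Recent Block Storage Allocation Scheme, so the following is only a hedged illustration of the general idea: spread recently mined blocks across edge peers that still have enough free storage, so that missing blocks can be retrieved quickly from nearby, well-resourced nodes. The peer names, capacities, and replication factor are hypothetical.

```python
"""Hypothetical sketch only: the talk's actual Recent Block Storage
Allocation Scheme is not specified in the abstract.  This shows one
simple capacity-balancing way to place a newly mined block on a few
edge peers."""

from dataclasses import dataclass, field
import heapq


@dataclass
class EdgePeer:
    name: str
    free_storage: int                      # remaining storage units (assumed)
    blocks: list = field(default_factory=list)


def allocate_recent_block(block_id: str, block_size: int,
                          peers: list, replicas: int = 3) -> list:
    """Store a new block on the `replicas` peers with the most free
    storage (a simple heuristic, not the authors' actual algorithm)."""
    # Only peers that can still hold the block are candidates.
    candidates = [p for p in peers if p.free_storage >= block_size]
    chosen = heapq.nlargest(replicas, candidates, key=lambda p: p.free_storage)
    for peer in chosen:
        peer.blocks.append(block_id)
        peer.free_storage -= block_size
    return chosen


if __name__ == "__main__":
    peers = [EdgePeer("sensor-A", 40), EdgePeer("gateway-B", 120),
             EdgePeer("phone-C", 80)]
    stored_on = allocate_recent_block("block-1001", block_size=10, peers=peers)
    print([p.name for p in stored_on])     # -> ['gateway-B', 'phone-C', 'sensor-A']
```

A real scheme would also need to account for block age, peer availability, and retrieval latency, none of which the abstract specifies.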

Session Chair

Song Guo (The Hong Kong Polytechnic University)

Session Keynote-2

Conference time: 9:55 AM — 10:40 AM HKT
Local time: Dec 1 (Tue), 5:55 PM — 6:40 PM PST

On Optimal Partitioning and Scheduling of DNNs in Mobile Cloud Computing

Jie Wu (Center for Networked Computing, Temple University, USA)

As Deep Neural Networks (DNNs) are widely used in applications such as computer vision for image segmentation and recognition, it is important to reduce the makespan of DNN computation, especially on mobile devices. Offloading is a viable solution that moves computation from a slow mobile device to a fast but remote server in the cloud. Because DNN computation is a multi-stage processing pipeline, it is critical to decide at which stage offloading should occur to minimize the makespan. Our observations show that the local computation time on a mobile device increases linearly, while the offloading time decreases monotonically along a convex curve, as more DNN layers are computed on the mobile device. Based on this observation, we first study the optimal partition and scheduling for a single line-structure DNN. We then extend the result to multiple line-structure DNNs. Heuristic results for general-structure DNNs, represented by Directed Acyclic Graphs (DAGs), are also discussed based on a path-based scheduling policy. Our proposed solutions are validated via a real system implementation.
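
The abstract's key observation, that local computation time grows roughly linearly while the remaining offloading time decreases along a convex curve as more layers stay on the device, suggests a simple way to find the best cut point for a line-structure DNN: evaluate every partition position and keep the one with the smallest makespan. The sketch below assumes exactly that setting; the timing arrays are made up, and this is not the authors' actual algorithm.

```python
"""Hypothetical sketch: pick the cut layer of a line-structure DNN that
minimizes makespan = on-device time up to the cut + offloading time for
the rest, under the abstract's linear/convex assumption."""


def best_partition(local_layer_time, offload_time_after):
    """local_layer_time[i]: assumed compute time of layer i on the device.
    offload_time_after[k]: assumed time to ship the intermediate result
    after layer k and finish layers k+1..n on the server (k = 0..n).
    Returns (k, makespan) minimizing local time + offload_time_after[k]."""
    n = len(local_layer_time)
    best_k, best_makespan = 0, float("inf")
    local = 0.0
    for k in range(n + 1):                 # cut after layer k (0 = offload everything)
        if k > 0:
            local += local_layer_time[k - 1]
        makespan = local + offload_time_after[k]
        if makespan < best_makespan:
            best_k, best_makespan = k, makespan
    return best_k, best_makespan


if __name__ == "__main__":
    # Made-up example: 5 layers; intermediate tensors shrink deeper in the
    # network, so offloading later transmits less but leaves less work for
    # the fast server.
    layer_time = [2.0, 2.0, 2.0, 2.0, 2.0]            # ms per layer on device
    offload_after = [14.0, 9.0, 6.5, 5.5, 5.2, 5.0]   # ms, convex and decreasing
    k, t = best_partition(layer_time, offload_after)
    print(f"offload after layer {k}, makespan {t:.1f} ms")  # -> layer 2, 10.5 ms
```

Under these assumptions the makespan is a sum of a linear and a convex term, hence convex (unimodal) in the cut position, so the full scan could be replaced by a ternary search when the pipeline is long.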

Session Chair

Jiannong Cao (The Hong Kong Polytechnic University)
