If you are interested in upcoming or future seminars, please contact us for more information.
Recent Advances in Hardware Acceleration
Speaker: Prof. Wayne LUK, Professor of Computer Engineering, Imperial College London, UK
Date/Time: Nov 25th 2024 (Mon) at 16:00 HKT
Abstract: This talk presents advances in hardware acceleration of demanding workloads, especially those involving machine learning. It begins with an overview of recent progress in hardware acceleration, covering applications and tools. An approach is then described for speeding up constraint-based causal discovery by shifting performance bottlenecks. Finally, some thoughts about future directions of design automation for hardware acceleration will be provided.
Biography: Wayne Luk is Professor of Computer Engineering with Imperial College and the Director of the HiPEDS Centre, focusing on high performance embedded and distributed systems. He was a Visiting Professor with Stanford University. His research focuses on theory and practice of customizing hardware and software designs. He is a Fellow of the Royal Academy of Engineering, the IEEE, and the BCS.
Emerging Computing Systems – Circuits, Architectures, and Smart Healthcare Applications
Speaker: Prof. Nima TaheriNejad, Professor, Heidelberg University, Germany
Date/Time: Nov 21st 2024 (Thur) at 15:00 HKT
Abstract: Traditional computing systems face major hurdles on their performance improvement path. At the same time, demand for better performance keeps increasing and diversifying due to emerging applications such as smart healthcare. The problem is compounded by the environmental and energy crisis we face, in which computing systems play a larger role every day. In this environment, we are in urgent need of novel computing solutions, from basic paradigms to end applications. In this talk, we will review some of our work to address these challenges. In particular, we talk about how to improve the performance of fundamental computing solutions using approximate computing and in-memory computing with memristors as an emerging memory technology. We also look at improving smart health applications, especially wearable monitoring, and improving their performance and efficiency.
Biography: Nima Taherinejad received his Ph.D. degree in electrical and computer engineering from The University of British Columbia (UBC), Vancouver, Canada, in 2015. He is currently a full professor at Heidelberg University, Heidelberg, Germany. His areas of work include computer architecture (especially memory-centric and approximate computing), cyber-physical and embedded systems, memristor-based circuits and systems, and smart healthcare. He has published three books, five patents, and more than 100 articles. Prof. Taherinejad has served as an editor of many journals and as an organizer and chair of various conferences and workshops. He has received several awards and scholarships from universities, conferences, and competitions, including the Best University Booth award at DATE 2021, first prize in the 15th Digilent Design Contest (2019) and in the Open-Source Hardware Competition at Eurolab4HPC (2019), as well as Best Teacher and Best Course awards at TU Wien (2020). Since 2023, he has been listed among the world’s top 2% scientists in the Stanford-Elsevier report.
When Commodity Computing Taps Out, Custom Hardware Saves the Day: Unlocking High-Performance, Power-Efficient Solutions for Deep Learning and Neurotechnology
Speaker: Dr. Ameer M. S. Abdelhadi, Assistant Professor, McMaster University, Hamilton, Canada
Date/Time: Nov 1st 2024 (Fri) at 9:00 HKT
Abstract:
Commodity computing often fails to meet the stringent requirements of emerging compute- and memory-intensive applications, particularly those constrained by latency, form-factor, and energy—especially when deployed on portable and wearable devices. The success of such systems hinges upon real-time processing, energy efficiency, and adaptability to user needs. Existing solutions are often post-hoc, resulting in limited portability and energy efficiency. For such systems to become practical, highly optimized algorithms, innovative computing paradigms, and customized hardware are essential to enable portable, scalable, adaptable, real-time, and power-efficient processing. Two decades ago, semiconductor technology faced the “power wall” due to the demise of Dennard scaling, which marked the end of the golden era where increasing frequency improved performance while reducing voltage maintained power consumption. Instead, massively parallel architectures emerged, driven by the continuity of Moore’s Law, which predicted the availability of densely packed, cheaper transistors. A prime example of this paradigm shift is the field of deep learning. Although the foundational concepts of deep learning have been known for decades, their practical application was impeded by the limited capabilities of commodity hardware. The introduction of general-purpose GPUs, with their massive spatially parallel processing capabilities, revolutionized deep learning by enabling the efficient training and inference of complex models. However, as deep learning algorithms become more sophisticated and demanding, and as workloads continue to expand, even the most advanced GPUs are reaching their limits. This necessitates the adoption of custom accelerators specifically designed for deep learning tasks, particularly for portable applications, where GPUs fall short in energy efficiency. This talk will delve into two critical domains where custom acceleration is indispensable: deep learning and neurotechnology. We will overview these fields and their potential applications and focus on the integral role that custom computer architecture has to play for these areas to flourish. By examining case studies, we will explore how custom-tailored hardware solutions have unlocked unprecedented performance and energy efficiency, advancing the capabilities of both fields. In neurotechnology, after taking a mile-high view of a brain-machine interface system, we will highlight the unique challenges and solutions that have emerged in creating hardware to meet the stringent requirements of neural data processing. In deep learning, we will discuss our current hardware-accelerated deep learning techniques for reducing computation, data traffic, and memory footprint. We conclude with a review of future research directions for custom-tailored computer architecture and its applications.
Biography:
Ameer M. S. Abdelhadi is an assistant professor of Computer Engineering in the Department of Electrical and Computer Engineering at McMaster University. He obtained his PhD in Computer Engineering from the University of British Columbia in 2016. Prior to joining McMaster, Dr. Abdelhadi held various academic positions as a research fellow and lecturer at the University of Toronto, Imperial College London, and Simon Fraser University. Before pursuing his graduate studies, he held multiple design and research positions in the semiconductor industry. Dr. Abdelhadi’s research interests span multiple areas, including application-specific custom-tailored computer architecture and hardware acceleration, hardware-efficient deep learning, neurotechnology, reconfigurable computing, and asynchronous circuits.
Towards Robust and Heterogeneous Federated Learning
Speaker: Prof. Edith C.H. Ngai, Associate Professor, Department of Electrical and Electronic Engineering, HKU
Date/Time: Oct 4th 2024 (Fri) at 10:00 HKT
Abstract: Edge computing is the concept of capturing, storing, processing, and analyzing data closer to where it is generated. Edge intelligence is a combination of AI and edge computing, which enables the deployment of machine learning algorithms on the edge devices where the data is generated. In this talk, I will give a brief introduction to federated learning and its benefits, major challenges, and applications. Then, I will focus on system heterogeneity and robustness in federated learning. Support for system heterogeneity allows edge devices with heterogeneous computation capabilities to collaborate in federated learning; it enables heterogeneous devices to perform federated learning with different local model architectures. After that, I will present our work on robust federated learning, which can improve the resilience of federated learning against data poisoning, model poisoning, and other kinds of security attacks.
Biography: Edith C.H. Ngai is currently an Associate Professor in the Department of Electrical and Electronic Engineering, The University of Hong Kong. Before joining HKU in 2020, she was an Associate Professor in the Department of Information Technology, Uppsala University, Sweden. Her research interests include the Internet of Things, edge intelligence, smart cities, and smart health. She was a VINNMER Fellow (2009) awarded by the Swedish governmental research funding agency VINNOVA. Her co-authored papers received a Best Paper Award at QShine 2023 and Best Paper Runner-Up Awards at ACM/IEEE IPSN 2013 and IEEE IWQoS 2010. She was an Area Editor of the IEEE Internet of Things Journal from 2020 to 2022. She is currently an Associate Editor of IEEE Transactions on Mobile Computing, IEEE Network Magazine, IEEE Transactions on Industrial Informatics, Ad Hoc Networks, and Computer Networks. She served as a program chair of ACM womENcourage 2015 and a TPC co-chair of IEEE SmartCity 2015, IEEE ISSNIP 2015, IEEE GreenCom 2022, and IEEE/ACM IWQoS 2024. She was the project leader of the “Green IoT” project in Sweden, which was named on IVA’s 100-list by the Royal Swedish Academy of Engineering Sciences in 2020. She received a Meta Policy Research Award in Asia Pacific in 2022. She was selected as one of the N²Women Stars in Computer Networking and Communications in 2022. She is a Distinguished Lecturer of the IEEE Communications Society in 2023-2024.
Empowering the Future: Unveiling Next-Generation RISC-V Devices
Speaker: Yuning LIANG, Founder and CEO, DeepComputing
Date/Time: Sep 5th 2024 (Thur) at 10:00 HKT
Abstract: The DC-ROMA RISC-V Laptop II is the world’s first RISC-V laptop pre-installed with and powered by Ubuntu. Adding to a long list of firsts, the new DC-ROMA Laptop II is the first to feature SpacemiT’s SoC K1, with its 8-core RISC-V CPU running at up to 2.0GHz and 16GB of memory. This doubles overall performance and energy efficiency compared with the previous generation’s 4-core SoC running at 1.5GHz. Sporting a RISC-V StarFive JH7110 SoC, the groundbreaking Mainboard was independently designed and developed by DeepComputing. It is the main component of the very first RISC-V laptop to run Canonical’s Ubuntu Desktop and Server and the Fedora Desktop OS, and represents the first independently developed Mainboard for a Framework Laptop. In conclusion, the series of product announcements by DeepComputing represents a leap forward in the realm of RISC-V technology, showcasing the convergence of innovation, collaboration, and vision. From the modular laptop’s versatility to the Enterprise Router’s security features, each product announcement offers a glimpse into the future of RISC-V computing. As these innovations continue to evolve and reshape the technological landscape, one thing remains clear: the journey towards a more connected, secure, and efficient future has only just begun. With each unveiling, DeepComputing reaffirms its commitment to pushing the boundaries of what’s possible, inspiring others to join in the pursuit of RISC-V innovation and progress.
Biography: Yuning Liang is the Founder and CEO of DeepComputing, focusing on developing innovative technology products based on RISC-V SoMs. From the world’s first RISC-V development laptop DC-ROMA to pads, workstations, remote-controlled cars, drones, and more, all are based on RISC-V chips. The world’s first RISC-V laptop, the world’s first RISC-V pad capable of making phone calls, and more are all Yuning’s masterpieces. Yuning’s innovation and pioneering spirit in the RISC-V field have enabled him to create several world firsts, leading DeepComputing to gain widespread recognition in the global RISC-V product commercialization field and contributing significantly to the advancement and progress of RISC-V technology. Yuning’s career has taken him from the UK to Switzerland, then to South Korea, and finally to China. He has a strong practical background in embedded systems, platform APIs, and system software.
FPGA Architecture and Tool Research with Cross-Layer Optimization
Speaker: Prof. Yajun HA, Professor, ShanghaiTech University, China
Date/Time: July 31st 2024 (Wed) at 16:00 HKT
Abstract: The Field Programmable Gate Array (FPGA) combines the programmability of a processor with the relatively high performance of an application-specific integrated circuit, making it an important energy-efficient computing platform. In FPGA research, we have noticed that, on the one hand, FPGA architectures and design automation tools interact strongly with each other; on the other hand, they face challenges and opportunities from the underlying semiconductor processes and devices and from new upper-layer application algorithms. This talk will introduce our cross-layer optimization of FPGA architecture and tool research from several aspects, such as energy-efficient FPGAs, process-deviation-sensitive FPGAs, scalable FPGAs, and new FPGA applications.
Biography: Yajun HA received the B.S. degree from Zhejiang University, Hangzhou, China, in 1996, the M.Eng. degree from the National University of Singapore, Singapore, in 1999, and the Ph.D. degree from Katholieke Universiteit Leuven, Leuven, Belgium, in 2004, all in electrical engineering. He is currently a Professor at ShanghaiTech University, China. He has been awarded several major grants from the National Natural Science Foundation of China (NSFC), including “The Research Fund for International Senior Scientist” and a “Major International (Regional) Joint Research Project”. He served as the Editor-in-Chief of IEEE Transactions on Circuits and Systems II: Express Briefs (2022-2023). Before joining ShanghaiTech University, he was a Director of the I2R-BYD Joint Lab at the Institute for Infocomm Research, Singapore, and an Adjunct Associate Professor at the Department of Electrical & Computer Engineering, National University of Singapore. Prior to this, he was an Assistant Professor with the National University of Singapore. His research interests focus on energy-efficient circuits and systems, including reconfigurable computing, ultra-low-power digital circuits and systems, embedded system architecture, and design tools for applications in robots, smart vehicles, and intelligent systems. He has published more than 150 internationally peer-reviewed journal/conference papers on these topics.
He is the recipient of several IEEE/ACM Best Paper Awards.
Jailbreak Large Language Models
Speaker: Prof. Tianwei ZHANG, Assistant Professor, Nanyang Technological University, Singapore
Date/Time: June 25th 2024 (Tue) at 16:00 HKT
Abstract: Large Language Models (LLMs) have emerged as transformative tools in the realm of artificial intelligence, powering a myriad of applications and fostering smooth human-machine interactions, particularly through chatbots like ChatGPT. However, the integration of these models introduces significant security risks. This talk focuses on one prominent security threat to LLMs, jailbreaking, which tries to deceive the models into outputting harmful content that violates the usage policies. I will present three recent works on LLM jailbreaking. (1) A deep dive into the nature of jailbreak prompts, which categorizes them into unique patterns and tests their efficacy on models like GPT-3.5 and GPT-4; our findings demonstrate the effectiveness of jailbreak prompts and the defensive strength of different models. (2) MasterKey, a state-of-the-art framework that not only deciphers the defensive mechanisms of popular LLM chatbots by exploiting time-based intricacies but also pioneers the automatic generation of jailbreak prompts. (3) Pandora, a comprehensive framework to jailbreak GPTs with a novel retrieval-augmented-generation poisoning technique. Together, these studies accentuate the pressing challenges and opportunities in securing LLM-driven systems.
Biography: Dr. Tianwei ZHANG is currently an assistant professor at the College of Computing and Data Science, Nanyang Technological University. He received his Bachelor’s degree from Peking University in 2011 and his Ph.D. degree from Princeton University in 2017. His research focuses on building efficient and trustworthy computer systems. He has been involved in the organization committees of numerous technical conferences, including serving as the general chair of KSEM’22. He has served on the editorial board of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) since 2021 and received its Best Associate Editor Award in 2023. He has published more than 130 papers in top-tier AI, systems, and security conferences and journals. He has received several best paper awards, including at ASPLOS’23, ICDIS’22, and ISPA’21.
The Next Wave of HLS: Fully Automated PyTorch-to-Accelerator Design Flow
Speaker: Prof. Deming Chen, Abel Bliss Professor, University of Illinois Urbana-Champaign (UIUC), USA; Editor-in-Chief, ACM TRETS; IEEE Fellow
Date/Time: June 7th 2024 (Fri) at 11:00 HKT
Abstract: In this talk, we introduce a new design flow, ScaleHLS, that establishes a new High-Level Synthesis (HLS) solution for automatically translating AI models described in PyTorch into customized AI accelerators. By adopting PyTorch as the input for AI designs (instead of traditional C/C++ for HLS), the lines of code and design simulation time can be reduced by about 10× and 100×, respectively. Meanwhile, despite being fully automated and able to handle various applications, this new flow achieves 1.29× higher throughput than DNNBuilder, a state-of-the-art RTL-based neural network accelerator on FPGAs. Such AI model-to-RTL flows pave the way for a new wave of HLS that could drive the high-productivity design of AI circuits with high density, high energy efficiency, low cost, and short design cycles. Such high-level model-to-RTL flows can also be expanded to other, non-AI domains. However, we are also facing existing and new challenges for such HLS solutions, such as ensuring the correctness of the high-level design, accommodating accurate low-level timing/energy information, handling the complexity of 3D circuits and/or chiplet-based design flows, and achieving all of this in a highly scalable manner.
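As a purely illustrative aside, the short Python sketch below defines a toy convolutional model in PyTorch and traces it into a graph form, showing the kind of high-level input such a flow starts from; the model and the TorchScript hand-off are assumptions for illustration only and are not taken from the ScaleHLS paper, which may use a different front end.

import torch
import torch.nn as nn

# Toy CNN of the kind a PyTorch-to-accelerator flow would take as input.
# The model is hypothetical and not taken from the ScaleHLS paper.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN().eval()
# Tracing to TorchScript is one common hand-off point for compilers that lower
# PyTorch models toward hardware; whether ScaleHLS uses this or another front
# end is an assumption made only for this illustration.
traced = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
print(traced.graph)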
Biography: Deming Chen is the Abel Bliss Professor of the Grainger College of Engineering at the University of Illinois at Urbana-Champaign (UIUC). His current research interests include reconfigurable and heterogeneous computing, hybrid cloud, system-level design methodologies, machine learning and acceleration, and hardware security. He has published more than 280 research papers, received 10 Best Paper Awards and one ACM/SIGDA TCFPGA Hall-of-Fame Paper Award, and given more than 150 invited talks. His research has generated high impact, with open-sourced solutions adopted by both academia and industry (e.g., FCUDA, DNNBuilder, CSRNet, SkyNet, ScaleHLS). He is an IEEE Fellow, an ACM Distinguished Speaker, and the Editor-in-Chief of ACM Transactions on Reconfigurable Technology and Systems (TRETS). He is the Director of the AMD-Xilinx Center of Excellence and the Hybrid-Cloud Thrust Co-Lead of the IBM-Illinois Discovery Accelerator Institute at UIUC. He has been involved in several startup companies, such as AutoESL and Inspirit IoT. He received his Ph.D. from the Computer Science Department of UCLA in 2005.
Backdoors in Deep Learning: The Good, the Bad and the Ugly
Speaker: Prof. Yingjie LAO, Associate Professor, Tufts University, Boston, USA
Date/Time: May 24th 2024 (Fri) at 9:00 HKT
Abstract: Deep learning is revolutionizing almost all AI domains and has become the core of many modern AI systems. Despite its superior performance compared to classical methods, deep learning also faces new security problems, such as adversarial and backdoor attacks, which are hard to discover and resolve due to its black-box-like property. Backdoor attacks are possible because of insecure model pretraining and outsourcing practices. Malicious third parties can add backdoors into their models or poison their released data before delivering it to the victims to gain illegal benefits. This threat seriously damages the safety and trustworthiness of AI development. While most works consider backdoors “evil”, some research explores leveraging them for positive purposes. A notable approach involves using backdoors as watermarks to detect illegal uses of commercialized data/models. Watermarks can also be used for detecting AI-generated data, particularly with the rise of large generative models like LLMs. In this presentation, I will share insights from our recent research, exploring both the “good” and “bad” aspects of backdoors in deep learning.
Biography: Yingjie LAO is currently an associate professor in the Department of Electrical and Computer Engineering at Tufts University. He received his Ph.D. degree from the Department of Electrical and Computer Engineering at University of Minnesota, Twin Cities, in 2015. His research has been recognized with an NSF CAREER Award, an IEEE TVLSI Prize Paper Award, and an ISLPED Best Paper Award. His research interests include trusted AI, hardware security, electronic design automation, VLSI architectures for machine learning and emerging cryptographic systems, and AI for healthcare and biomedical applications.
GPU Acceleration for Word-Wise Homomorphic Encryption
Speaker: Dr. Hao YANG, PhD Scholar, Nanjing University of Aeronautics and Astronautics
Date/Time: May 20th 2024 (Thur) at 10:00 HKT
Abstract: (Fully) Homomorphic Encryption (HE) is a promising privacy-enhancing cryptographic technique that allows computations to be performed on encrypted data without the need for decryption, and it has potentially wide applications across various industries. However, the primary challenge faced by HE is its low performance. The GPU, a powerful accelerator not only for AI tasks but also for cryptographic computations, is explored to accelerate HE in this context. This presentation first provides a brief overview of HE and its applications, then delves into the implementation details of HE. We explore several optimizations for both low-level arithmetic and high-level homomorphic operations on GPUs. Building on these foundations, we introduce an open-source GPU library specifically for word-wise HE schemes, named Phantom FHE. Finally, some potential directions for future research are discussed.
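To make the core idea of computing on encrypted data concrete, here is a minimal Python sketch of a toy additively homomorphic scheme (Paillier), in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This only illustrates the homomorphic property: the parameters are far too small to be secure, and the word-wise schemes targeted by GPU libraries such as Phantom FHE (e.g., BGV/BFV/CKKS) work quite differently.

import math
import random

# Toy Paillier parameters: far too small for any real security (illustration only).
# Requires Python 3.9+ (math.lcm, modular inverse via pow).
p, q = 1789, 1847
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)           # Carmichael's lambda for n = p*q
g = n + 1                              # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(123), encrypt(456)
c_sum = (c1 * c2) % n2                 # homomorphic addition: no decryption needed
assert decrypt(c_sum) == 123 + 456
print("decrypted sum:", decrypt(c_sum))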
Biography: Hao YANG completed his bachelor’s and PhD degrees at Nanjing University of Aeronautics and Astronautics in 2019 and 2024, respectively. His research focuses on lattice-based cryptography, fully homomorphic encryption, and GPU acceleration. He has published in journals including IEEE TIFS, IEEE TDSC, and IEEE TC. He has participated in more than 5 national projects from NSFC and MOST. He has also participated as a main contributor in projects funded by Ant Group and Huawei.
Sensor Location Optimization for Effective and Robust Beamforming
Speaker: Dr. Wei LIU, Reader, Queen Mary University of London, IEEE AESS Distinguished Lecturer
Date/Time: May 16th 2024 (Thur) at 16:00 HKT
Abstract: In many applications, the sensor array’s geometrical layout is assumed to be fixed and given in advance. However, it is possible to change the geometrical layout of the array, including the adjacent sensor spacing, and these additional spatial degrees of freedom (DOFs) can be exploited to improve performance in terms of beamforming, direction finding, or both. With the development of compressive sensing (CS), or the sparsity maximization framework, a new CS-based framework with a theoretically optimum solution (due to the convex nature of the formulation) has been developed for general sensor location optimization, with robustness against various array model errors considered too. In this talk, the CS-based framework for sensor location optimization will be presented for effective and robust beamforming, together with a general introduction to both narrowband and broadband/wideband beamforming.
Biography: Dr. Wei LIU received his BSc in Space Physics (minor in Electronics) in 1996 and LLB in Intellectual Property Law in 1997, both from Peking University, China, his MPhil from the Department of Electrical and Electronic Engineering, University of Hong Kong, in 2001, and his PhD in 2003 from the School of Electronics and Computer Science, University of Southampton, U.K. Since September 2023, he has been a Reader at the School of Electronic Engineering and Computer Science, Queen Mary University of London. His research interests cover a wide range of topics in signal processing, with a focus on array signal processing (beamforming, source separation/extraction, direction-of-arrival estimation, target tracking, localization, etc.) and its various applications.
Revealing the Weakness of Addition Chain-based Masked SBox Implementations
Speaker: Dr. Jingdian MING, Doctoral Researcher, Jiaxing Research Institute, Zhejiang University
Date/Time: May 14th 2024 (Tue) at 9:00 HKT
Abstract: Addition chain is a well-known approach for implementing higher-order masked SBoxes. However, this approach induces more computations of intermediate monomials, which in turn leaks more information related to the sensitive variables and may consequently decrease its side-channel resistance. Thus, we investigate the resilience of monomial computations with respect to side-channel analysis. We select several representative addition chain implementations, based on their theoretical resilience, that demonstrate the strongest and weakest resistance to side-channel analysis. In practical experiments based on an ARM Cortex-M4 architecture, we collect power and electromagnetic traces, considering different noise levels. The results reveal that the weakest masked SBox implementation exhibits a side-channel resistance nearly identical to an unprotected implementation. Moreover, we find that some monomials with smaller output size leak more sensitive information than the SBox output. This finding applies to various other masking schemes, including inner product masking.
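For readers unfamiliar with addition chains, the Python sketch below evaluates one commonly cited addition chain for the AES SBox inversion x^254 in GF(2^8) (the chain used in Rivain-Prouff style masking), making the intermediate monomials such as x^3, x^15, and x^252 explicit. It is an illustrative assumption about the chain, not necessarily one of the implementations analyzed in the talk, and it omits the masking itself.

def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1.
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def gf_sq(a):
    return gf_mul(a, a)   # squaring is linear in GF(2^8), so it is cheap to mask

def sbox_inverse(x):
    # Addition chain for x^254 with four nonlinear multiplications; every
    # intermediate monomial below is a value a masked implementation computes.
    x2   = gf_sq(x)                           # x^2
    x3   = gf_mul(x2, x)                      # x^3   (nonlinear mult 1)
    x12  = gf_sq(gf_sq(x3))                   # x^12
    x15  = gf_mul(x12, x3)                    # x^15  (nonlinear mult 2)
    x240 = gf_sq(gf_sq(gf_sq(gf_sq(x15))))    # x^240
    x252 = gf_mul(x240, x12)                  # x^252 (nonlinear mult 3)
    return gf_mul(x252, x2)                   # x^254 (nonlinear mult 4)

# x^254 equals the multiplicative inverse for all non-zero x (group order 255).
assert all(gf_mul(x, sbox_inverse(x)) == 1 for x in range(1, 256))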
Biography: Jingdian MING received the Ph.D. degree in 2022 from the School of Cyber Security, University of Chinese Academy of Sciences. He is currently an Associate Researcher at Jiaxing Research Institute, Zhejiang University. His main research interests include hardware security, cryptographic engineering, and side-channel analysis. Over the years, he has published multiple papers on hardware security in venues including TIFS, TCHES, and DATE.
Applications, Tools and Outlook for Reconfigurable Computing: Selected Musings
Speaker: Prof. Andreas KOCH, Professor, Technische Universität Darmstadt
Date/Time: Apr 10th 2024 (Wed) at 16:00 HKT
Abstract: With recent improvements in silicon fabrication technology, reconfigurable devices can now be applied to accelerate functions beyond the traditional computing domains. In-network processing and smart computational storage are just two of these approaches. We discuss both simple and more complex application examples for both of these domains, covering a network-attached ML inference appliance and a JOIN accelerator for distributed databases, and also look forward to using a cache-coherent interconnect, such as CCIX or CXL, to tackle a complex database acceleration scenario linking a computational storage unit using near-data processing to a full-scale PostgreSQL database system. Beyond these hardware architectures, the talk also examines improvements in programming tools specialized for the realization of reconfigurable computing systems. Using the open-source TaPaSCo framework as an example, advanced features such as on-chip dynamic parallelism, flexibly customizable inter-processing element communications, and host/accelerator shared virtual memory with physical page migration capabilities are discussed.
Biography: Prof. Andreas KOCH is a full professor at TU Darmstadt in Germany, where he leads the Embedded Systems and Applications Group. He has been working on accelerated computing since the early 1990s, mainly using reconfigurable devices such as FPGAs and CGRAs, but also considering GPUs and specialized ML accelerators. He has always “played on both sides of the fence”, performing research not only on hardware architectures, but also on the software tools and libraries required to exploit them. Among others, he is currently applying his expertise to the domains of storage acceleration, near-data and in-network processing as well as ML inference for sum-product networks.
Efficient Programming on Heterogeneous Accelerators for Sustainable Computing
Speaker: Prof. Peipei ZHOU, Assistant Professor, Department of Electrical and Computer Engineering, University of Pittsburgh
Date/Time: Mar 18th 2024 (Mon) at 9:00 HKT
Abstract: There is a growing call for increasingly agile computational power for edge and cloud infrastructure to serve the computationally complex needs of ubiquitous computing devices. One important challenge is addressing the holistic environmental impacts of these next-generation computing systems. A life-cycle view of sustainability for computing systems is necessary to reduce environmental impacts such as greenhouse gas emissions from these computing systems in different phases: manufacturing, operational, and disposal/recycling. My research investigates how to efficiently program and map widely used workloads on heterogeneous accelerators and seamlessly integrate them with existing computing systems towards sustainable computing. In this talk, I will first discuss how new mapping solutions, i.e., composing heterogeneous accelerators within a system-on-chip with both FPGAs and AI tensor cores, achieve orders-of-magnitude energy efficiency gains when compared to monolithic accelerator mapping designs for various applications, including deep learning, security, and others. Then, I will apply such novel mapping solutions to show how design space explorations are performed when composing heterogeneous accelerators in latency-throughput tradeoff analysis. I will further discuss how such mapping and scheduling can be applied to other computing systems, such as GPUs, to improve energy efficiency and, therefore, reduce the operational carbon cost. Finally, I will introduce the REFRESH FPGA chiplets, explain why REFRESH chiplets help reduce the embodied carbon cost, and discuss the challenges and opportunities.
Biography: Prof. Peipei ZHOU is a tenure-track assistant professor in the Department of Electrical and Computer Engineering at the University of Pittsburgh. She received her Ph.D. in Computer Science (2019) and M.S. in Electrical and Computer Engineering (2014) from UCLA, and her B.S. in Electrical and Computer Engineering (2012) from Southeast University. Her research investigates architecture, programming abstraction, and design automation tools for reconfigurable computing and heterogeneous computing. She has published 30 papers in IEEE/ACM computer system and design automation conferences and journals including FPGA, FCCM, DAC, ICCAD, ISPASS, TCAD, TODAES, TECS, IEEE Micro, etc. Her work has won the 2019 IEEE TCAD Donald O. Pederson Best Paper Award. Other awards include the 2023 ACM/IEEE IGSC Best Viewpoint Paper Finalist, the 2018 IEEE ISPASS Best Paper Nominee, and the 2018 IEEE/ACM ICCAD Best Paper Nominee.
Fast Deep Learning for Scientific Applications with FPGAs
Speaker: Dr. Zhiqiang QUE, Research Associate, Department of Computing, Imperial College London
Date/Time: Mar 13th 2024 (Wed) at 16:00 HKT
Abstract: In the domain of scientific research, particularly in fields like particle physics, the demand for rapid data acquisition and in-situ processing systems is critical. These systems rely on custom processing elements with very low latency and high data bandwidth, along with real-time control modules. Integrating real-time machine learning algorithms with these processes can enable advances in scientific discovery. A critical component of such integrations is the acceleration of deep learning inference using reconfigurable accelerators such as FPGAs, which enables sophisticated processing in real time with superior accuracy. In this talk, I first describe FPGA-based fast Graph Neural Networks (GNNs) tailored for particle physics applications, demonstrating our achievements in minimizing latency and maximizing throughput. I then present our new studies on the automation of optimizations for Fast Deep Learning (FastDL) in scientific applications.
Biography: Dr. Zhiqiang QUE is a research associate in the Custom Computing Research Group in the Department of Computing at Imperial College London. His experience includes ARM CPU design at Marvell Semiconductor (2011-2015) and low-latency FPGA systems at CFFEX (2015-2018). He completed his B.S. and M.S. at Shanghai Jiao Tong University in 2008 and 2011, respectively, and earned his PhD under Prof. Wayne LUK at Imperial College in 2023. His research focuses on computer architecture, embedded systems, high-performance computing, and design automation for hardware optimization.
Intelligent Digital Design and Implementation with Machine Learning in EDA
Speaker: Prof. Zhiyao XIE, Assistant Professor, Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology
Date/Time: Feb 1st 2024 (Thu) at 14:00 HKT
Abstract: As the integrated circuit (IC) complexity keeps increasing, the chip design cost is skyrocketing. There is a compelling need for design efficiency improvement through new electronic design automation (EDA) techniques. In this talk, I will present multiple design automation techniques based on machine learning (ML) methods, whose major strength is to explore highly complex correlations based on prior circuit data. These techniques cover various chip-design objectives and design stages, including layout, netlist, register-transfer level (RTL), and micro-architectural level. I will focus on the different challenges in design objective prediction at different stages, and present our customized solutions. In addition, I will share our latest observations in design generation with large language models.
Biography: Prof. Zhiyao XIE is an Assistant Professor in the Department of Electronic and Computer Engineering (ECE) at Hong Kong University of Science and Technology. He received his Ph.D. in 2022 from Duke University. His research focuses on electronic design automation (EDA) and machine learning for VLSI design. Prof. XIE has received multiple prestigious awards, including the UGC Early Career Award 2023, ACM Outstanding Dissertation Award in EDA 2023, EDAA Outstanding Dissertation Award 2023, MICRO 2021 Best Paper Award, ASP-DAC 2023 Best Paper Award, ACM SIGDA SRF Best Poster Award 2022, etc. During his Ph.D. studies, Prof. XIE also worked as a research intern at Nvidia, Arm, Cadence, and Synopsys.
Software-Programmable Accelerator-Centric Systems
Speaker: Dr. Zhenman Fang, Assistant Professor, Computer Engineering, Simon Fraser University
Date/Time: Nov 15th 2023 (Wed) at 10:00 HKT
Abstract: With the end of CPU scaling due to dark silicon limitations, customizable hardware accelerators on FPGAs have gained increasing attention in modern datacenters and edge devices due to their low power, high performance, and energy efficiency. Evidenced by Microsoft’s FPGA deployment in its Bing search engine and Azure cloud, the public FPGA cloud access offered by Amazon and Alibaba, Intel’s US$16.7B acquisition of Altera, and AMD’s US$50B acquisition of Xilinx, FPGA-based customizable acceleration is considered one of the most promising approaches to sustain the ever-increasing performance and energy-efficiency demand of emerging application domains such as machine learning and big data analytics. In this talk, Dr. Fang will first explain how FPGA hardware accelerators achieve amazing improvements and give an overview of our research on software-programmable accelerator-centric systems [PIEEE 2019, TRETS 2021]. Following that, he will present a few successful case studies on software-defined hardware acceleration for machine learning and big data analytics, including 1) Caffeine for early-day CNN acceleration [TCAD 2019 best paper], 2) HeatViT for vision transformer pruning and acceleration [HPCA 2023], 3) CHIP-KNN for k-nearest neighbors acceleration [FPT 2020, TRETS 2023], and 4) SQL2FPGA for compiling Spark SQL onto FPGAs [FCCM 2023].
Biography: Dr. Zhenman Fang is a Tenure-Track Assistant Professor in the School of Engineering Science, Computer Engineering Option, Simon Fraser University, Canada, where he founded and directs the HiAccel lab. His recent research focuses on customizable computing with specialized hardware acceleration, which aims to sustain the ever-increasing performance, energy-efficiency, and reliability demand of important application domains in the post-Moore’s-law era. It spans the entire computing stack, including emerging application characterization and acceleration (including machine learning, computational genomics, big data analytics, and high-performance computing), novel accelerator-rich and near-data computing architecture designs, and corresponding programming, runtime, and tool support. Dr. Fang has published over 50 papers in top conferences and journals and holds two US patents, including two best paper awards (TCAD 2019 Donald O. Pederson best paper award and MEMSYS 2017), two best paper nominees (HPCA 2017 and ISPASS 2018), and an invited paper from Proceedings of the IEEE 2019. His research has also been recognized with an NSERC (Natural Sciences and Engineering Research Council of Canada) Alliance Award (2020), a CFI JELF (Canada Foundation for Innovation John R. Evans Leaders Fund) Award (2019), a Xilinx University Program Award (2019), a Team Award from Xilinx Software and IP Group (2018), and a Postdoc Fellowship from the UCLA Institute for Digital Research and Education (2016-2017). More details can be found on his personal website: https://www.sfu.ca/~zhenman/.
Design and Implementation of Lightweight Post-Quantum Cryptography: From Algorithmic Derivation to Architectural Innovation
Speaker: Dr. Jiafeng (Harvest) XIE, Assistant Professor, Department of Electrical and Computer Engineering, Villanova University
Date/Time: Oct 18th 2023 (Wed) at 16:00 HKT
Abstract: Post-quantum cryptography (PQC) has recently drawn significant attention from various communities, along with the rapid advancement in building large-scale quantum computers. Apart from the National Institute of Standards and Technology (NIST) PQC standardization process targeting general-purpose algorithms, the research community is also looking for lightweight PQC for specific applications. In this talk, I will follow this trend to introduce the design and implementation of a promising lightweight PQC, the Ring-Binary-Learning-with-Errors (RBLWE)-based encryption scheme. Specifically, this talk takes a hardware implementation perspective, covering algorithmic derivation and architectural innovation. A series of novel algorithms and architectures will be covered. I hope that this talk will attract more research on lightweight PQC development and its possible future standardization.
Biography: Dr. XIE is currently an Assistant Professor in the Department of Electrical and Computer Engineering, Villanova University. His research interests include cryptographic engineering, hardware security, post-quantum cryptography, and digital design for large-scale computing systems. Dr. Xie has served as a technical program committee member for many reputable conferences such as HOST, ICCAD, and DAC. He is currently serving as an Associate Editor for IEEE Transactions on VLSI Systems, Microelectronics Journal, and IEEE Access, and will serve as an Associate Editor for IEEE Transactions on Circuits and Systems II starting in 2024. He received the IEEE Access Outstanding Associate Editor award for 2019. He also received the 2022 IEEE Philadelphia Section Merrill Buckley Jr. Student Project Award and the Best Paper Award at the IEEE International Symposium on Hardware Oriented Security and Trust 2019 (HOST’19).
6G, Metaverse, and Generative AI: From Convergence to Emergence
Speaker: Prof. Martin Maier, Full Professor, Institut National de la Recherche Scientifique
Date/Time: Oct 12th (Thursday) at 16:00 HKT
Abstract: 6G networks will bring forth a variety of novel enabling technologies, such as integrated sensing and communications for perceptive mobile networks, quantum-enabled wireless networks, blockchainized mobile networks, and AI-native networks with intelligence-endogenous capabilities. The push from more advanced technological tools becoming available, as well as the pull from society’s needs, implies that there must be several 6G paradigm shifts, e.g., the transition from 2D to global 3D connectivity, services beyond communication, and a cyber-physical continuum between the connected physical world of senses, actions, and experiences and its programmable digital representations. Importantly, NSF’s view on Next G research is that Next G includes, but is not limited to, the specific key performance indicator requirements and topics of interest addressed by the different 6G standards development organizations. In fact, according to the Next G Alliance roadmap, there is a unique opportunity to address the interdependencies between technological and human evolution, given that there is a symbiotic relationship between technology and a population’s societal and economic needs. As technology shapes human behavior and lifestyles, those needs shape technological evolution.
This talk focuses on the fusion of digital and real worlds. We introduce the concept of the so-called Multiverse as an interesting attempt to help realize this fusion. The Multiverse offers eight different types of reality, including but not limited to virtual and augmented reality. A term closely related to the Multiverse is the recently emerging Metaverse. The Metaverse might be viewed as the next step after the Internet, similar to how the mobile Internet expanded and enhanced the early Internet in the 1990s and 2000s. The various adventures that this place has to offer will surround us both socially and visually. The Metaverse will put the user first, allowing every member of our species to delve into new realms of possibilities. A modern, digital renaissance is taking place on the grandest stage we have ever seen, involving billions of connected brains. In the coming decades, a new era of virtual life will bring in our next big milestone as a networked species.
Some argue that we are in the middle of making a historic pivot from adapting nature to our species to adapting our species back to nature. This pivot requires a wholesale rethinking of our worldview, shifting to a new scientific paradigm that views nature as a life source rather than a resource and perceives the Earth as a complex, self-organizing, and self-evolving system. While we know less about the ocean floor than we know about the surface of the moon, we know even less about the complex life that busies itself under our feet in the soil and cannot be seen with the naked eye. A handful of forest soil contains more life forms than there are people on the planet. The talk will end by providing an outlook on the convergence of digital evolution with biology, as illustrated for the use case of the Metaverse’s virtual society. We outline our ideas of the virtual society’s symbiosis of the Inter(net) and (human) beings in the future Metaverse, giving rise to the powerful concept of Interbeing. We show that generative AI is instrumental in creating life-like digital organisms that produce clever solutions that AI researchers did not consider or had thought impossible, sometimes even outwitting us humans.
Biography: Martin Maier is a full professor with the Institut National de la Recherche Scientifique (INRS), Montréal, Canada. He was educated at the Technical University of Berlin, Germany, and received MSc and PhD degrees, both with distinction (summa cum laude), in 1998 and 2003, respectively. He was a recipient of the two-year Deutsche Telekom doctoral scholarship from 1999 through 2001. He was a visiting researcher at the University of Southern California (USC), Los Angeles, CA, in 1998 and Arizona State University (ASU), Tempe, AZ, in 2001. In 2003, he was a postdoc fellow at the Massachusetts Institute of Technology (MIT), Cambridge, MA. Before joining INRS, Dr. Maier was a research associate at CTTC, Barcelona, Spain, from 2003 through 2005. He was a visiting professor at Stanford University, Stanford, CA, from 2006 through 2007. He was a co-recipient of the 2009 IEEE Communications Society Best Tutorial Paper Award. Further, he was a Marie Curie IIF Fellow of the European Commission from 2014 through 2015. In 2017, he received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt (AvH) Foundation in recognition of his accomplishments in research on FiWi-enhanced mobile networks. In 2017, he was named one of the three most promising scientists in the category “Contribution to a better society” of the Marie Skłodowska-Curie Actions (MSCA) 2017 Prize Award of the European Commission. In 2019/2020, he held a UC3M-Banco de Santander Excellence Chair at Universidad Carlos III de Madrid (UC3M), Madrid, Spain.
Solving Extreme-Scale Problems on Sunway Supercomputers
Speaker: Prof. Haohuan Fu, Professor, Tsinghua University, and Deputy Director, National Supercomputing Center in Wuxi
Date/Time: May 22nd (Monday) at 16:00 HKT
Abstract: Defined by name as the fastest computers in the world, supercomputers have been important tools for making scientific discoveries and technological breakthroughs. In this talk, we will introduce a series of Sunway supercomputers, which demonstrate a superb example of bringing tens of millions of cores to bear on a specific scientific or engineering problem and providing chances for widening our knowledge boundary. We will also provide examples of solving extreme-scale problems on Sunway supercomputers in the domains of climate modeling, earthquake simulation, quantum simulation, and the understanding of satellite images. Through these examples, we discuss the key issues and important efforts required for bridging the computing power and the major challenges that we face.
Biography: Haohuan Fu is a Professor in the Department of Earth System Science, Tsinghua University, and the deputy director of the National Supercomputing Center in Wuxi. Fu has a Ph.D. in computing from Imperial College London. His research work focuses on supercomputing software, leading to three ACM Gordon Bell Prizes (non-hydrostatic atmospheric dynamic solver in 2016, nonlinear earthquake simulation in 2017, and random quantum circuit simulation in 2021).
ESSPER: Elastic and Scalable FPGA-Cluster System for High-Performance Reconfigurable Computing with Supercomputer Fugaku
Speaker: Prof. Kentaro Sano, RIKEN Center for Computational Science, Japan
Date/Time: April 3rd (Monday) at 16:00 HKT
Abstract: At the RIKEN Center for Computational Science (R-CCS), we have been developing an experimental FPGA cluster named “ESSPER (Elastic and Scalable System for high-PErformance Reconfigurable computing),” which is a research platform for reconfigurable HPC. ESSPER is composed of sixteen Intel Stratix 10 SX FPGAs, which are connected to each other by a dedicated 100Gbps inter-FPGA network. We have developed our own shell (SoC) and its software APIs for the FPGAs, supporting inter-FPGA communication. The FPGA host servers are connected to a 100Gbps InfiniBand switch, which allows distant servers to access the FPGAs remotely by using a software-bridged version of Intel’s OPAE FPGA driver, called R-OPAE. Through the 100Gbps InfiniBand network and R-OPAE, ESSPER is connected to the world’s fastest supercomputer, Fugaku, deployed at RIKEN, so that from Fugaku we can remotely program bitstreams onto the FPGAs using R-OPAE and off-load tasks to them. In this talk, I introduce ESSPER’s concept, its hardware and software system stack, programming environment, and applications under development, as well as our future prospects for reconfigurable HPC.
Biography: Dr. Kentaro Sano has been the team leader of the processor research team at the RIKEN Center for Computational Science (R-CCS) since 2017, responsible for research and development of future high-performance processors and systems. He is also a visiting professor with the advanced computing system laboratory at Tohoku University. He received his Ph.D. from the Graduate School of Information Sciences, Tohoku University, in 2000. From 2000 until 2018, he was a Research Associate and an Associate Professor at Tohoku University. He was a visiting researcher at the Department of Computing, Imperial College London, and at Maxeler Technologies in 2006 and 2007. His research interests include data-driven and spatial-parallel processor architectures such as coarse-grained reconfigurable arrays (CGRAs), FPGA-based high-performance reconfigurable computing, high-level synthesis compilers and tools for reconfigurable custom computing machines, and system architectures for next-generation supercomputing based on the data-flow computing model.
A 40-minute Introduction to Post-Quantum Cryptography
Speaker: Dr. David Jingwei Hu, Nanyang Technological University, Singapore
Date/Time: March 20th (Monday) at 16:00 HKT
Abstract: In this talk, I share my personal views on why post-quantum cryptography is important, what post-quantum cryptography is, and finally, how we may conduct post-quantum cryptographic research. I will also share my study journey in Mainland China, Hong Kong, and Singapore.
Biography: Jingwei Hu received his Ph.D. degree from the City University of Hong Kong in 2018. He is currently a postdoctoral research fellow at Nanyang Technological University, Singapore. He has made contributions to several post-quantum cryptographic hardware designs, a side-channel analysis methodology for post-quantum cryptographic hardware, and a post-quantum algorithmic design. In 2022, the National Institute of Standards and Technology (NIST) released its status report on the Third Round of the NIST Post-Quantum Cryptography Standardization Process and stated that his work confirmed that the performance of BIKE would be suitable for most applications. In 2021, he was nominated by Nanyang Technological University to participate in the 9th Global Young Scientists Summit organized by the National Research Foundation, Singapore. He won the Outstanding Academic Performance Award from the City University of Hong Kong in 2017.
My Career Development as an Educator
Speaker: Dr. Matthew Tang, Queen Mary University of London, UK
Date/Time: Feb 13th (Monday) at 16:00 HKT
Abstract: In this talk, I am going to look back at my career development in academia over the past 15 years. I will first share my experiences in the various roles I have held at CUHK and QMUL: instructor, lecturer, centre director, and engineer. In particular, I would like to contrast the teaching environments in China, Hong Kong, and the United Kingdom. Furthermore, I will highlight my thoughts on building the skill sets that enable effective lecture delivery and programme development. Lastly, I would like to discuss recent changes in higher education and invite the audience to a dialogue about the education of our future.
Biography: Matthew Wai-Chung Tang received his B.Eng. in Computer Engineering and his MPhil and PhD in Computer Science and Engineering from the Chinese University of Hong Kong (CUHK) in 2003, 2005, and 2008, respectively. He is now a Senior Lecturer (Teaching and Scholarship) in Computer Engineering at the School of Electronic Engineering and Computer Science (EECS), Queen Mary University of London (QMUL). He teaches primarily in the QMUL-BUPT Joint Programme (JP). Currently, he is the Director of the JP Innovation Centre and the Programme Lead of the Internet of Things Engineering Programme. He received the Teaching Excellence Award from the BUPT-QMUL Joint Programme in 2017/18. He is a Chartered Engineer, a Fellow of the Higher Education Academy (HEA), and a member of the IET and IEEE. Matthew is interested in RISC-V architecture and implementation, reconfigurable computing, and design automation algorithms for Field-Programmable Gate Arrays (FPGAs). He received the Celoxica Best Paper Award at the 2007 IEEE Southern Conference on Programmable Logic (SPL’07) and the Best Presentation Award at the 2007 International Ph.D. Workshop on SoC (IPS’07).
Security Challenges and Opportunities in Emerging Device Technologies
Speaker: Prof. Nele Mentens, Leiden University, The Netherlands & KU Leuven, Belgium
Date/Time: Jan 27th (Friday) at 16:00 HKT
Abstract: While traditional chips in bulk silicon technology are widely used for reliable and highly efficient systems, there are applications that call for devices in other technologies. On the one hand, novel device technologies need to be re-evaluated with respect to potential threats and attacks, and how these can be faced with existing and novel security solutions and methods. On the other hand, emerging device technologies bring opportunities for building future security systems. This talk will give an overview of the minimal hardware resources needed to build secure systems and discuss the state of the art in the design of these hardware resources in emerging device technologies.
Biography: Nele Mentens is a professor at Leiden University in the Netherlands and KU Leuven in Belgium. Her research interests are in the field of configurable computing and hardware security. She was/is the PI in around 25 finished and ongoing research projects with national and international funding. She serves as a program committee member of renowned international security and hardware design conferences. She was the general co-chair of FPL’17, and she was/is the program chair of FPL’20, CARDIS’20, RAW’21, VLSID’22, DDECS’23, ASAP’23, and FPL’23. She is a (co-)author in around 150 publications in international journals, conferences, and books. She received best paper awards and nominations at CHES’19, AsianHOST’17, and DATE’16. Nele is an associate editor for IEEE TIFS, IEEE CAS Magazine, IEEE S&P, IEEE TCAD, ACM TRETS, and ACM TODAES. She also serves as an expert for the European Commission.
Edge Intelligence: Hardware Challenges and Opportunities
Speaker: Prof. Jose Nunez-Yanez, Linköping University, Sweden
Date/Time: Dec 2nd (Friday) at 16:00 HKT
Abstract: In this talk we will initially discuss some basic edge computing concepts, followed by the hardware challenges and opportunities of performing deep learning at the edge. We will then present the FADES (Fused Architecture for DEnse and Sparse tensor processing) heterogeneous architecture, focusing on its application to graph neural network (GNN) acceleration. GNNs can deliver high accuracy when applied to non-Euclidean data in which data elements do not fit into a regular structure. They combine sparse and dense data characteristics, and this, in turn, results in a combination of compute- and bandwidth-intensive requirements that is challenging to meet with general-purpose hardware. FADES is a highly configurable architecture, fully described with high-level synthesis and integrated into TensorFlow Lite and PyTorch. It creates a dataflow of dataflows with multiple hardware threads and compute units that optimize data access and processing element utilization. This enables fine-grained hybrid stream processing of sparse and dense tensors suitable for multi-layer graph neural networks.
Biography: Prof. Nunez-Yanez is a professor in hardware architectures for machine learning at Linköping University with over 20 years of experience in the design of high-performance embedded hardware. He holds a PhD in hardware-based parallel data compression from Loughborough University, UK, with three patents awarded on the topic of high-speed parallel data compression. Prior to joining Linköping University, he was a Reader (Associate Professor) at the University of Bristol, UK. He spent several years working in industry at ST Micro (Milan), ARM (Cambridge), and Sensata Systems (Swindon) with Marie Curie and Royal Society fellowships. His main area of expertise is in the design of hardware architectures and heterogeneous systems for signal processing and machine learning, with a focus on run-time adaptation, high performance via parallelism, and energy efficiency.
FPGA Technology: from Chips to Tools to Systems
Speaker: Prof. Dirk Koch, Heidelberg University
Date/Time: Nov 4th (Friday) at 16:00 HKT
Abstract: The Novel Computing Technologies (NCT) group at Heidelberg University works on mostly technology-focused aspects of reconfigurable computing. Our group maintains the open FABulous eFPGA framework, which has been used for the tape-out of 10 chips so far. With that tool, we have designed FPGA fabric clones of established Xilinx and Lattice FPGAs but also new fabrics that use memristor technology for configuration storage. Our GoAhead partial reconfiguration tool allows the implementation of very adaptive FPGA systems where the FPGA resource utilization can be dynamically adjusted according to runtime requirements and operational conditions in a transparent manner. We use it to build a dynamic database processing system where accelerator modules are plugged together into processing pipelines for accelerating problems that are only known at runtime. We also develop a security infrastructure that is required to operate FPGAs in data centers, and the talk will show how we crashed over 100 AWS F1 (FPGA) instances. More importantly, the talk will present the FPGADefender virus scanner that helps to prevent such attacks. Note that we have vacant positions.
Biography: Dirk Koch is a Professor at Heidelberg University. His main research interests are run-time reconfigurable systems based on FPGAs, embedded systems, computer architecture, VLSI, and hardware security. Dirk developed techniques and tools for self-adaptive distributed embedded control systems based on FPGAs. Current research projects include database acceleration using FPGA-based stream processing, HPC and exascale computing, as well as reconfigurable instruction set extensions for CPUs and using FPGAs in data centers. Dirk Koch is the author of the book “Partial Reconfiguration on FPGAs” and a co-editor of the book “FPGAs for Software Programmers”.
CKKS Bootstrapping
Speaker: Dr. Jingwei Hu
Date/Time: Oct 26th (Tuesday) at 10:00 AM Beijing Time (10:00 PM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Computing over encrypted data without bootstrapping
Speaker: Dr. Jingwei Hu
Date/Time: July 27th (Tuesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Revisiting FHEW-like Homomorphic Encryptions
Speaker: Dr. Jingwei Hu
Date/Time: May 26th (Wednesday) at 10:30 AM Beijing Time (10:30 PM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Running Post-Quantum Cryptography on Real Hardware
Speaker: Dr. Wen Wang
Date/Time: March 31st (Wednesday) at 10:00 PM Beijing Time (10:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Coding Theory used in NIST Post-Quantum Cryptography Standardization (Part-II)
Speaker: Dr. Jingwei Hu
Date/Time: Oct 21st (Wednesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Cryptos in Networks and Hardware Security
Speaker: Dr. Yao Liu
Date/Time: Oct 21st (Wednesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Coding Theory used in NIST Post-Quantum Cryptography Standardization (Part-I)
Speaker: Dr. Jingwei Hu
Date/Time: Sept 9th (Wednesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Sparse-Dense Polynomial Multiplication in Lightweight PQC
Speaker: Mr. Guangyan LI
Date/Time: Sept 9th (Wednesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Implementation and Benchmarking of Round 2 Candidates in the NIST Post-Quantum Cryptography Standardization Process Using FPGAs
Speaker: Prof. Kris Gaj
Date/Time: Aug 19th (Tuesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Slides: Link
Recent Progress in Fully Homomorphic Encryption (Part-II)
Speaker: Dr. Jingwei Hu
Date/Time: July 14th (Tuesday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Reviews on Number Theoretic Transform
Speaker: Dr. Donglong Chen
Date/Time: June 29th (Monday) at 9:00 PM Beijing Time (9:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Multiplication Algorithms in Lightweight PQC – Binary Ring-LWE
Speaker: Mr. Guangyan LI
Date/Time: June 23rd (Monday) at 8:00 PM Beijing Time (8:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English
Recent Progress in Fully Homomorphic Encryption (Part-I)
Speaker: Dr. Jingwei Hu
Date/Time: May 26th (Sunday) at 8:00 PM Beijing Time (8:00 AM Eastern Time)
Venue: Online Seminar (Zoom)
Language: English