[PAST EVENT] Challenges in Designing Die-Stacked GPU Architectures

January 27, 2017
McGlothlin-Street Hall, Room 20
251 Jamestown Rd
Williamsburg, VA 23185


Recent advances in process technologies have enabled the design of Systems-on-Chips (SoCs) with multiple cores, IP blocks, memory dies, and various other components. While this tight integration offers major performance and power benefits, it also results in high design cost and complexity, a lack of innovation due to a high barrier to entry, and low chip yield. Recent developments in die-stacking technologies, specifically interposer-based 2.5D stacking, provide opportunities for reducing design complexity, allowing more players to enter the market, and, most importantly, improving chip yields.

In the first part of this talk, I will present an overview of recent trends in the computing domain, focusing mainly on the importance of parallelism and energy efficiency. Next, I will describe the main challenges in designing multi-processor SoCs and discuss research areas that are crucial to improving the design flow, power, and performance of such systems. In the third part of the talk, I will give a brief background on die-stacking and then explain some of the challenges in designing interposer-based GPU architectures. Finally, I will present a simulation methodology for evaluating the memory system of such large-scale systems.


Onur Kayiran is a Member of Technical Staff, Design Engineer at AMD Research. He received his Ph.D. in Computer Science and Engineering from Penn State in 2015. His research interests are broadly in the domain of computer architecture; specifically, he is interested in die-stacking, GPU architectures, heterogeneous CPU+GPU systems, interconnection networks, and memory systems.


Contact: Xu Liu, 757-221-7739