BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Sabre//Sabre VObject 4.5.7//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:sabre-vobject-532684ee-ec8c-4481-bca2-884094caab0a
DTSTAMP:20260414T022823Z
SUMMARY:Nyquist Lecture in Electrical Engineering: It All Depends: A Person
 al Perspective on the Evolution of Computer Architecture
DESCRIPTION:Abstract:\n\nOver my 50+ year career in computer architecture r
 esearch\, processor\nperformance has improved by over three orders of magn
 itude. Those improvements\nhave been premised on improvements at every lev
 el of a long-standing stack of\nabstractions. That stack is composed of:
  (1) algorithms expressed in\nprogramming languages that are (2) converted
  by compilers into sequences of\ninstructions that are (3) executed on comp
 uter architectures that are (4)\nimplemented using transistors in the late
 st technology.\n\nIn this talk\, I am going to focus on the computer arch
 itecture level of the stack\,\nwhere execution of instructions occurs. At
  that
  level\, the intrinsic nature of\ninstructions has remained relatively con
 stant\, but the techniques to execute\nthem quickly have evolved significa
 ntly. At its heart\, however\, executing\ninstructions is a step-by-step p
 rocess where each step depends on information\ngenerated at prior steps. A
 nd therefore\, dealing with those dependencies is what\ndetermines the per
 formance of the architecture. So in the first part of this\ntalk\, I will 
 frame a number of the techniques that have been employed to\noptimize perf
 ormance as instances of a set of methods that can mitigate the\nimpact of 
 dependencies between (and within) instructions.\n\nAlthough the abstractio
 n stack and nature of instructions has remained\nremarkably constant over 
 more than the last 50 years\, that is now likely to\nchange. That is due t
 o the end of long-term technology scaling trends (usually\nreferred to as M
 oore's Law and Dennard scaling)\, which is limiting the ability to\nkeep ma
 king improvements without changing the stack or at least the interfaces\nbe
 tween levels. So in the second part of the talk\, I will discuss my\nphilo
 sophical perspective on the important attributes of the existing stack and
 \nprovide some thoughts on how to both preserve those attributes and impro
 ve\nperformance.\n\nBio:\n\nJoel S. Emer is a Professor of the Practice at
  MIT's Electrical Engineering a
 nd\nComputer Science Department (EECS) and a member of the Computer Scienc
 e and\nArtificial Intelligence Laboratory (CSAIL). He is also a Senior Dis
 tinguished\nResearch Scientist at Nvidia in Westford\, MA\, where he is re
 sponsible for\nexploration of future architectures as well as modeling and
  analysis\nmethodologies. Prior to joining Nvidia\, he worked at Intel whe
 re he was an Intel\nFellow and Director of Microarchitecture Research. Pre
 viously he worked at\nCompaq and Digital Equipment Corporation (DEC).\n\nF
 or over 50 years\, Dr. Emer has held various research and advanced develop
 ment\npositions investigating processor micro-architecture and developing 
 performance\nmodeling and evaluation techniques. He has made architectural
  contributions to a\nnumber of VAX\, Alpha and x86 processors and is recog
 nized as one of the\ndevelopers of the widely employed quantitative approa
 ch to processor performance\nevaluation. He has also been recognized for h
 is contributions in the advancement\nof deep learning accelerator design\,
  spatial and parallel architectures\,\nprocessor reliability analysis\, me
 mory dependence prediction\, pipeline and cache\norganization\, performanc
 e modeling methodologies and simultaneous\nmultithreading. He earned a doc
 torate in electrical engineering from the\nUniversity of Illinois in 1979.
  He received a bachelor's degree with highest\nhonors in electrical engin
 eering in 1974\, and his master's degree in 1975 --\nboth from Pu
 rdue University. Among his honors\, he is a Fellow of both the ACM\nand IE
 EE\, and a member of the NAE. He also received both the Eckert-Mauchly\naw
 ard and the B. Ramakrishna Rau award for lifetime contributions in compute
 r\narchitecture.\n
X-ALT-DESC:FMTTYPE=text/html:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//E
 N"><HTML><p><strong>Abstract:</strong></p>\n\n<p>Over my 50+ year career i
 n computer architecture research\, processor<br />\nperformance has improv
 ed by over three orders of magnitude. Those improvements<br />\nhave been 
 premised on improvements at every level of a long-standing stack of<br />\
 nabstractions. That stack is composed of: (1) algorithms expressed in<br
  />\nprogramming languages that are (2) converted by compilers into sequenc
 es of<br />\ninstructions that are (3) executed on computer architectures 
 that are (4)<br />\nimplemented using transistors in the latest technology
 .</p>\n\n<p>In this talk\, I am going to focus on the computer architectur
 e level of the stack\,<br />\nwhere execution of instructions occurs. At t
 hat
  level\, the intrinsic nature of<br />\ninstructions has remained relative
 ly constant\, but the techniques to execute<br />\nthem quickly have evolv
 ed significantly. At its heart\, however\, executing<br />\ninstructions i
 s a step-by-step process where each step depends on information<br />\ngen
 erated at prior steps. And therefore\, dealing with those dependencies is 
 what<br />\ndetermines the performance of the architecture. So in the firs
 t part of this<br />\ntalk\, I will frame a number of the techniques that 
 have been employed to<br />\noptimize performance as instances of a set of
  methods that can mitigate the<br />\nimpact of dependencies between (and 
 within) instructions.</p>\n\n<p>Although the abstraction stack and nature 
 of instructions has remained<br />\nremarkably constant over more than the
  last 50 years\, that is now likely to<br />\nchange. That is due to the e
 nd of long-term technology scaling trends (usually<br />\nreferred to as M
 oore&#39\;s Law and Dennard scaling)\, which is limiting the ability to<br
  />\nkeep making improvements without changing the stack or at least the i
 nterfaces<br />\nbetween levels. So in the second part of the talk\, I wil
 l discus
 s my<br />\nphilosophical perspective on the important attributes of the e
 xisting stack and<br />\nprovide some thoughts on how to both preserve tho
 se attributes and improve<br />\nperformance.<br />\n<br />\n<strong>Bio:<
 /strong><br />\nJoel S. Emer
  is a Professor of the Practice at MIT&#39\;s Electrical Engineering and<b
 r />\nComputer Science Department (EECS) and a member of the Computer Scie
 nce and<br />\nArtificial Intelligence Laboratory (CSAIL). He is also a Se
 nior Distinguished<br />\nResearch Scientist at Nvidia in Westford\, MA\, 
 where he is responsible for<br />\nexploration of future architectures as 
 well as modeling and analysis<br />\nmethodologies. Prior to joining Nvidi
 a\, he worked at Intel where he was an Intel<br />\nFellow and Director of
  Microarchitecture Research. Previously he worked at<br />\nCompaq and Dig
 ital Equipment Corporation (DEC).</p>\n\n<p>For over 50 years\, Dr. Emer h
 as held various research and advanced development<br />\npositions investi
 gating processor micro-architecture and developing performance<br />\nmode
 ling and evaluation techniques. He has made architectural contributions to
  a<br />\nnumber of VAX\, Alpha and x86 processors and is recognized as on
 e of the<br />\ndevelopers of the widely employed quantitative approach to
  processor performance<br />\nevaluation. He has also been recognized for 
 his contributions in the advancement<br />\nof deep learning accelerator d
 esign\, spatial and parallel architectures\,<br />\nprocessor reliability 
 analysis\, memory dependence prediction\, pipeline and cache<br />\norgani
 zation\, performance modeling methodologies and simultaneous<br />\nmultit
 hreading. He earned a doctorate in electrical engineering from the<br />\n
 University of Illinois in 1979. He received a bachelor&#39\;s degree with 
 highest<br />\nhonors in electrical engineering in 1974\, and his master&#
 39\;s degree in 1975 --<br />\nboth from Purdue University. Among his hono
 rs\, he is a Fellow of both the ACM<br />\nand IEEE\, and a member of the 
 NAE. He also received both the Eckert-Mauchly<br />\naward and the B. Rama
 krishna Rau award for lifetime contributions in computer<br />\narchitectu
 re.</p>\n</HTML>
CREATED;TZID=America/New_York:20260303T111255
URL;VALUE=URI:https://engineering.yale.edu/news-and-events/events/nyquist-l
 ecture-electrical-engineering-it-all-depends-personal-perspective-evolutio
 n-computer-architecture
DTSTART;TZID=America/New_York:20260402T160000
DTEND;TZID=America/New_York:20260402T170000
SEQUENCE:0
END:VEVENT
END:VCALENDAR
