DI.FCT.UNL - NOVA LINCS Distinguished Lecture Series

The distinguished lecture series of the Computer Science Department (DI) and the NOVA Laboratory for Computer Science (NOVA LINCS) at FCT NOVA brings to the department some of the most influential researchers and practitioners in the field of computer science. The series aims to give students and researchers an opportunity to learn more about some of the most groundbreaking work in computer science and the people who led it, and will hopefully become a source of inspiration for the future careers of the department's students. The series hosts one talk every year, so that it can focus on outstanding computer scientists and engineers who are leaders in their fields.

The inaugural lecture in this series was given in October 2012 by Turing award winner Prof. Barbara Liskov from the Massachusetts Institute of Technology. Below, you will find the history of previous distinguished lectures and the calendar for the currently scheduled events.

DI FCT NOVA - NOVA LINCS Distinguished Lecture #10

May 25th, 15:00 2022. Location: Main Auditorium FCT NOVA.

Fernando Pereira, Google Research 

Does Language Mirror the Mind? A Personal Journey

Over 20 years ago, I wrote the following in an invited paper:

Given the enormous conceptual and technical difficulties of building a comprehensive theory of grounded language processing, treating language as an autonomous system is very tempting. However, there is a weaker form of grounding that can be exploited more readily than physical grounding, namely grounding in a linguistic context. Following this path, sentences can be viewed as evidence for other sentences through inference, and the effectiveness of a language processor may be measured by its accuracy in deciding whether a sentence entails another, or whether an answer is appropriate for a question.

This path turned out to be even more productive than I imagined, though with different technical tools than the ones I favored then. So, is language-internal modeling all we need for creating language-using systems that pass the (Turing?) test? I will illustrate the question with a sequence of examples drawn from the work of many collaborators and colleagues, not answering it decisively, but still marveling at how so much of the contents and processes of our minds can be mechanically inferred from our constant chatter.

Fernando Pereira short bio:

Fernando Pereira is Vice President and Engineering Fellow at Google, where he leads research and development in natural language understanding and machine learning.

His previous positions include chair of the Computer and Information Science department of the University of Pennsylvania, head of the Machine Learning and Information Retrieval department at AT&T Labs, and research and management positions at SRI International.

He received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982, and has over 120 research publications on computational linguistics, machine learning, bioinformatics, speech recognition, and logic programming, as well as several patents.

He was elected AAAI Fellow in 1991 for contributions to computational linguistics and logic programming, ACM Fellow in 2010 for contributions to machine learning models of natural language and biological sequences, and ACL Fellow for contributions to sequence modeling, finite-state methods, and dependency and deductive parsing. He was president of the Association for Computational Linguistics in 1993.


DI FCT NOVA - NOVA LINCS Distinguished Lecture #9

January 26th, 15:00 2021. Location: The lecture will be offered online via a Zoom session (link TBA).

Scott Aaronson, The University of Texas at Austin and its Quantum Information Center, USA 

Quantum Computational Supremacy and Its Applications

Last fall, a team at Google made the first-ever claim of "quantum computational supremacy"---that is, a clear quantum speedup over a classical computer for some task---using a 53-qubit programmable superconducting chip called Sycamore.

Last month, a group at the University of Science and Technology of China made a claim of quantum supremacy, using "BosonSampling" (a proposal by me and Alex Arkhipov in 2011) with 50-70 photons in an optical network. In addition to engineering, these accomplishments built on a decade of research in quantum complexity theory.

This talk will discuss questions like: what exactly were the contrived computational problems that were solved? How does one verify the outputs using a classical computer? And how sure are we that the problems are indeed classically hard? I'll end with a proposed application for these sampling-based quantum supremacy experiments---namely, the generation of certified random bits, for use (for example) in proof-of-stake cryptocurrencies---that I've been developing and that Google is now working to demonstrate.

Prof. Scott Aaronson bio:

Scott Aaronson is the David J. Bruton Centennial Professor of Computer Science at the University of Texas at Austin. He received his bachelor's degree from Cornell University and his PhD from the University of California, Berkeley. Before coming to the University of Texas at Austin, he spent nine years as a professor in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. Aaronson's research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers.

His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, the Vannevar Bush Fellowship, and the Tomassoni-Chisesi Prize in Physics, and has been an Association for Computing Machinery (ACM) Fellow since 2019.

He frequently engages with the non-academic media, including Closer To Truth, Science News, The Age, ZDNet, Slashdot, New Scientist, The New York Times, and Forbes magazine, and his blog "Shtetl-Optimized" is a widely read reference on quantum computing and many other science topics.


DI FCT NOVA - NOVA LINCS Distinguished Lecture #8

December 19th, 14:30 2019. Location: Main Auditorium FCT NOVA.

Hiroshi Ishii, Massachusetts Institute of Technology, USA

Making Digital Tangible: the battle against the pixel empire

Today's mainstream Human-Computer Interaction (HCI) research primarily addresses functional concerns – the needs of users, practical applications, and usability evaluation. Tangible Bits and Radical Atoms are driven by vision and carried out with an artistic approach. While today's technologies will become obsolete in one year, and today's applications will be replaced in 10 years, true visions – we believe – can last longer than 100 years.

Tangible Bits seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information and computation, making bits directly manipulatable and perceptible both in the foreground and background of our consciousness (peripheral awareness). Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback.

Radical Atoms leaps beyond Tangible Bits by assuming a hypothetical generation of materials that can change form and properties dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is the future material that can transform its shape, conform to constraints, and inform the users of its affordances. Radical Atoms is a vision for the future of Human-Material Interaction, in which all digital information has a physical manifestation, thus enabling us to interact directly with it.

I will present the trajectory of our vision-driven design research from Tangible Bits towards Radical Atoms, illustrated through a variety of interaction design projects that have been presented and exhibited in Media Arts, Design, and Science communities. These emphasize that the design for engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by my motto, “Be Artistic and Analytic. Be Poetic and Pragmatic.”

Prof. Hiroshi Ishii bio:

Hiroshi Ishii is the Jerome B. Wiesner Professor of Media Arts and Sciences at the MIT Media Laboratory. After joining the Media Lab in 1995, he founded the Tangible Media Group to make digital tangible by giving physical form to digital information and computation. Here, he pursues his visions of Tangible Bits and Radical Atoms that will transcend the Painted Bits of Graphical User Interfaces, the current dominant paradigm of Human-Computer Interaction.

He is recognized as a founder of "Tangible User Interfaces (TUI)," a new research genre based on the CHI'97 "Tangible Bits" paper presented with Brygg Ullmer in Atlanta, Georgia, which led to the spin-off ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI), launched in 2007.

Prior to joining the MIT Media Lab, Ishii led the CSCW (Computer-Supported Cooperative Work) research group at NTT Human Interface Laboratories Japan from 1988-1994, where he and his team invented the TeamWorkStation and ClearBoard. He received a B.E. degree in electronic engineering, and M.E. and Ph.D. degrees in computer engineering from Hokkaido University, Japan, in 1978, 1980 and 1992, respectively.

Some of his ideas have contributed to the design of the gesture interface used by Tom Cruise in Steven Spielberg's "Minority Report".

In 2019, he won the SIGCHI Lifetime Research Award for his fundamental and influential research contributions to the field of human-computer interaction over the past quarter century.


DI FCT NOVA - NOVA LINCS Distinguished Lecture #7

December 20th, 14:30 2018. Location: Main Auditorium FCT UNL.

Manuela M. Veloso, Carnegie Mellon University and JPMorgan

Towards a Lasting Human-AI Interaction 

Artificial intelligence, including extensive data processing, decision making and execution, and learning from experience, offers new challenges for an effective human-AI interaction. This talk delves into the multiple roles humans can have in such interaction, as well as the underlying challenges to AI, particularly in terms of collaboration and interpretability. The presentation is grounded in the context of autonomous mobile service robots, with applications to other areas.

Prof. Manuela Veloso short bio:

Manuela M. Veloso recently joined JPMorgan Chase to create and head an Artificial Intelligence (AI) Research Center. Veloso is on leave from Carnegie Mellon University (CMU), where she is the Herbert A. Simon University Professor in the School of Computer Science, and where she was the Head of the Machine Learning Department until June 2018. Her research spans AI, Robotics, and Machine Learning. At CMU, she founded and directs the CORAL research laboratory, for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn. Veloso and her students research a variety of autonomous robots, including mobile service robots and soccer robots. Veloso is an AAAI Fellow, ACM Fellow, AAAS Fellow, and IEEE Fellow, Einstein Chair Professor of the Chinese National Academy of Science, the co-founder and past President of RoboCup, and past President of AAAI. See www.cs.cmu.edu/~mmv for further information, including publications.

DI FCT NOVA - NOVA LINCS Distinguished Lecture #6 

October 11th, 14:30 2017. Location: Main Auditorium FCT UNL.

Pascal Van Hentenryck,  University of Michigan, USA

Data Science for Mobility

The availability of massive data sets, combined with progress in communication technologies, connected and automated vehicles, and analytics, has the potential to revolutionize mobility for entire population segments. This talk reviews this unique opportunity, from its potential societal impact, to the development of new mobility systems, and the computational and data science powering them. In particular, the talk presents recent developments in on-demand multimodal transit systems and ride sharing on real case studies, as well as progress in evidence-based optimization and differential privacy to meet current and future challenges.

Prof. Pascal Van Hentenryck short bio:

Pascal Van Hentenryck is the Seth Bonder Collegiate Professor of Engineering at the University of Michigan. He is a professor of Industrial and Operations Engineering, a professor of Electrical Engineering and Computer Science, and a core faculty member in the Michigan Institute for Data Science. Van Hentenryck's current research is at the intersection of optimization and data science, with applications in energy, transportation, and resilience. He is a fellow of INFORMS, a fellow of AAAI, and the recipient of two honorary degrees. He was awarded the 2002 INFORMS ICS Award for research excellence in operations research and computer science, the 2006 ACP Award for research excellence in constraint programming, the 2010-2011 Philip J. Bray Award for Teaching Excellence at Brown University, and a 2013 IFORS Distinguished Speaker award. He is the author of five MIT Press books and has developed several optimization systems that are widely used in academia and industry. He will be co-program chair of the AAAI conference in 2019.

DI FCT NOVA - NOVA LINCS Distinguished Lecture #5 

September 28th, 14:30 2016. Location: Main Auditorium FCT UNL.

Luca Cardelli, Microsoft Research and University of Oxford, UK 

Telling Molecules What To Do

Digital computers allow us to manipulate information systematically, leading to recent advances in our ability to structure our society and to communicate in richer ways. They also allow us to orchestrate physical forces, transforming and optimizing our manufacturing processes. What they cannot do very well is interact directly with biological organisms or, more generally, orchestrate molecular arrangements. Thanks to biotechnology, nucleic acids (DNA/RNA) are particularly effective 'user-programmable' entities at the molecular scale. They can be directed to assemble nano-scale structures, to produce physical forces, to act as sensors and actuators, and to do general computation in between. We will be able to interface them with biological machinery to detect and cure diseases at the cellular level under program control. The theory of computability directed the design of digital computers, and it can now inform the development of new computational fabrics, at the molecular level, that will eventually give us control of an entirely new domain of reality.

Prof Luca Cardelli short bio:

Luca Cardelli has a Ph.D. in computer science from the University of Edinburgh. He worked at Bell Labs, Murray Hill, from 1982 to 1985, and at the Digital Equipment Corporation Systems Research Center in Palo Alto from 1985 to 1997, before assuming a position at Microsoft Research in Cambridge, UK, where he was head of the Programming Principles and Tools and Security groups until 2012. Since 2014 he has also been a Royal Society Research Professor at the University of Oxford.

His main interests are in programming languages and concurrency, and more recently in programmable biology and nanotechnology. He is a Fellow of the Royal Society, a Fellow of the Association for Computing Machinery, an Elected Member of the Academia Europaea, and an Elected Member of the Association Internationale pour les Technologies Objets. His web page is at lucacardelli.name.

DI FCT NOVA - NOVA LINCS Distinguished Lecture #4

Also featured as part of the Department's 40th anniversary commemorative session. 

November 25th, 2015. Location: Main Auditorium FCT UNL.

Jeannette Wing, Microsoft Research, USA 

Computational Thinking

My vision for the 21st Century:

Computational thinking will be a fundamental skill used by everyone in the world.  

To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but also benefit people in all fields.

Dr Jeannette Wing short bio:

Jeannette M. Wing is Corporate Vice President, Microsoft Research. She is on leave from Carnegie Mellon University, where she is President's Professor of Computer Science and twice served as the Head of the Computer Science Department.  From 2007-2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation.  She received her S.B. and S.M. degrees in Computer Science and Engineering in 1979 and her Ph.D. degree in Computer Science in 1983, all from the Massachusetts Institute of Technology.

Professor Wing's general research interests are in the areas of trustworthy computing, specification and verification, concurrent and distributed systems, programming languages, and software engineering. Her current interests are in the foundations of security and privacy. She was or is on the editorial board of twelve journals, including the Journal of the ACM and Communications of the ACM.

She is currently Chair of the DARPA Information Science and Technology (ISAT) Board and Chair-Elect of the AAAS Section on Information, Computing and Communications.  She has been a member of many other advisory boards, including: Networking and Information Technology (NITRD) Technical Advisory Group to the President's Council of Advisors on Science and Technology (PCAST), National Academies of Sciences' Computer Science and Telecommunications Board, ACM Council, and Computing Research Association Board.  She served as co-chair of NITRD from 2007-2010.  She was on the faculty at the University of Southern California, and has worked at Bell Laboratories, USC/Information Sciences Institute, and Xerox Palo Alto Research Laboratories.  She received the CRA Distinguished Service Award in 2011 and the ACM Distinguished Service Award in 2014.  She is a member of Sigma Xi, Phi Beta Kappa, Tau Beta Pi, and Eta Kappa Nu.  She is a Fellow of the American Academy of Arts and Sciences, American Association for the Advancement of Science, the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronic Engineers (IEEE).

DI FCT NOVA - NOVA LINCS Distinguished Lecture #3

September 17th, 2014 - 16h00m (Main Auditorium FCT UNL)

Leslie Lamport, Microsoft Corporation, Seattle, USA

An Incomplete History of Concurrency Chapter 1. 1965–1977

It is insufficiently considered that men more often require to be reminded than informed. ~Samuel Johnson

A personal view of the first dozen years of the modern field of concurrent and distributed computing, viewed from the perspective of 2014. Further chapters are left for others to write.

Dr Leslie Lamport short bio:

Leslie Lamport is a Principal Researcher at Microsoft Research. He received the IEEE Emanuel R. Piore Award for his contributions to the theory and practice of concurrent programming and fault-tolerant computing.  He was also awarded the Edsger W. Dijkstra Prize in Distributed Computing for his paper “Reaching Agreement in the Presence of Faults.” He won the IEEE John von Neumann Medal and was also elected to the U.S. National Academy of Engineering and the U.S. National Academy of Sciences.

Prior to his current position, his career included extended tenures at SRI International and Digital Equipment Corporation (later Compaq Corporation). The author or co-author of nearly 150 publications on concurrent and distributed computing and their applications, he holds a B.S. degree in mathematics from Massachusetts Institute of Technology as well as M.S. and Ph.D. degrees in mathematics from Brandeis University.

Leslie Lamport was awarded the 2013 A.M. Turing Award.

Dr Leslie Lamport short bio at Wikipedia.

DI FCT NOVA Distinguished Lecture #2

October 9th, 2013 - 14h30m (Main Auditorium FCT UNL)

Thomas Henzinger, IST Austria

Quantitative Reactive Modeling

Formal verification aims to improve the quality of hardware and software by detecting errors before they do harm.  At the basis of formal verification lies the logical notion of correctness, which purports to capture whether or not a circuit or program behaves as desired.  We suggest that the boolean partition into correct and incorrect systems falls short of the practical need to assess the behavior of hardware and software in a more nuanced fashion against multiple criteria such as function, performance, cost, reliability, and robustness.  For this purpose, we propose quantitative fitness measures for reactive models of concurrent and embedded systems.

Besides measuring the "fit" between a system and a requirement numerically, the goal of the ERC project QUAREM (Quantitative Reactive Modeling) is to obtain quantitative generalizations of the paradigms on which the success of qualitative reactive modeling rests, such as compositionality, abstraction, model checking, and synthesis.

Professor Thomas A. Henzinger short bio:

Thomas A. Henzinger is President of IST Austria (Institute of Science and Technology Austria). He holds a Dipl.-Ing. degree in Computer Science from Kepler University in Linz, Austria, an M.S. degree in Computer and Information Sciences from the University of Delaware, a Ph.D. degree in Computer Science from Stanford University (1991), and a Dr.h.c. degree from Fourier University in Grenoble, France. He was Assistant Professor of Computer Science at Cornell University (1992-95), Assistant Professor (1996-97), Associate Professor (1997-98), and Professor (1998-2004) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was also Director at the Max-Planck Institute for Computer Science in Saarbruecken, Germany (1999) and Professor of Computer and Communication Sciences at EPFL in Lausanne, Switzerland (2004-09). His research focuses on modern systems theory, especially models, algorithms, and tools for the design and verification of reliable software, hardware, and embedded systems. His HyTech tool was the first model checker for mixed discrete-continuous systems. He is an ISI highly cited researcher, a member of Academia Europaea, a member of the German Academy of Sciences (Leopoldina), a member of the Austrian Academy of Sciences, a Fellow of the ACM, and a Fellow of the IEEE. He has received the Wittgenstein Award of the Austrian Science Fund (FWF) and an ERC Advanced Investigator Grant.

DI FCT NOVA Distinguished Lecture #1

October 3rd, 2012 - 14h30m (Main Auditorium FCT UNL)

Barbara Liskov, Massachusetts Institute of Technology

Programming the Turing Machine

Turing provided the basis for modern computer science. However there is a huge gap between a Turing machine and the kinds of applications we use today. This gap is bridged by software, and designing and implementing large programs is a difficult task. The main way we have of keeping the complexity of software under control is to make use of abstraction and modularity.

This talk will discuss how abstraction and modularity are used in the design of large programs, and how these concepts are supported in modern programming languages. It will also discuss what support is needed going forward.

Professor Barbara Liskov short bio:

Barbara Liskov is an Institute Professor at MIT and also Associate Provost for Faculty Equity. She is a member of the National Academy of Engineering and the National Academy of Sciences, a fellow of the American Academy of Arts and Sciences, and a fellow of the ACM. She received the ACM Turing Award in 2009, the ACM SIGPLAN Programming Language Achievement Award in 2008, the IEEE Von Neumann medal in 2004, a lifetime achievement award from the Society of Women Engineers in 1996, and in 2003 was named one of the 50 most important women in science by Discover Magazine. Her research interests include distributed systems, replication algorithms to provide fault-tolerance, programming methodology, and programming languages. Her current research projects include Byzantine-fault-tolerant storage systems and online storage systems that provide confidentiality and integrity for the stored information.