KDD Trustworthy AI Day 2022 will take place on August 15, 2022, 8:30am-5:00pm EDT. To attend the event, please register for the KDD 2022 Conference. You can register using the "One-Day Conference" option if you only want to attend Trustworthy AI Day.

8:30am-8:40am   Opening Remarks
8:40am-9:40am   Keynote #1: Elham Tabassi (NIST - Chief of Staff, Information Technology Laboratory)
  Title: AI Risk Management

Session Chair: Wei Wang (UCLA)
9:40am-10:00am  Coffee Break
10:00am-11:05am Invited Talks
Brian Stanton (NIST - Project Lead for AI User Trust)
  Title: Trust and Perception of an AI System
David James Marcos (Microsoft - Director, Governance & Enablement: Office of Responsible AI)
  Title: Responsible AI: Building out Practical Governance
Dinesh Verma (IBM Research - CTO for Edge Computing)
  Title: Trusting the outcomes of AI models: Experiences from Applications of AI in IoT Solutions
Santosh Kumar (U. Memphis - Director, NIH NIBIB mHealth Center for Discovery, Optimization, and Translation of Temporally-Precise Interventions)
  Title: Challenges and Opportunities in Trustworthy AI for Health and Wellness

Session Chair: Yizhou Sun (UCLA)
11:05am-12:00pm Panel Discussion
Panelists: Elham Tabassi, Brian Stanton, David James Marcos, Dinesh Verma, Santosh Kumar

Panel Moderator: Mani Srivastava (UCLA)
12:00pm-1:00pm  Lunch
1:00pm-2:00pm   Keynote #2: James Zou (Stanford University)
  Title: Debugging and editing AI models using natural language

Session Chair: Mani Srivastava (UCLA)
2:00pm-3:00pm   Invited Talks
Q. Vera Liao (Microsoft Research Montréal)
  Title: From trustworthy AI to appropriate trust: lessons from human-centered explainable AI
Jiaqi Ma (Harvard University)
  Title: The Unique Challenges in Trustworthy Graph Machine Learning
John P. Dickerson (University of Maryland)
  Title: On the Responsible Use of Machine Learning in Market Design

Session Chair: Wei Wang (UCLA)
3:00pm-3:30pm   Coffee Break
3:30pm-4:10pm   Invited Talks
Susan Aaronson (George Washington University)
  Title: Our Data Driven Future Needs a Rethink: Data Governance Ain't Working
Karen Levy (Cornell University)
  Title: AI and Data Governance
Anne Washington (New York University)
  Title: KDD in the public interest

Session Chair: Yizhou Sun (UCLA)
4:10pm-5:00pm   Panel Discussion
Panelists: James Zou, Jiaqi Ma, John P. Dickerson, Susan Aaronson, Karen Levy, Anne Washington

Panel Moderator: David James Marcos (Microsoft)
5:00pm Closing Remarks

Talk and Speaker Details

Elham Tabassi

Title: AI Risk Management

Abstract: AI systems sometimes do not operate as intended because they make inferences from patterns observed in data rather than from a true understanding of what causes those patterns. Ensuring that these inferences are helpful and not harmful in particular use cases – especially when inferences are rapidly scaled and amplified – is fundamental to trustworthy AI. While answers to the question of what makes an AI technology trustworthy differ, there are certain key characteristics which support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. There are also key guiding principles to take into account, such as accountability, fairness, and equity. Cultivating trust and communication about how to understand and manage the risks of AI systems will help create opportunities for innovation and realize the full potential of this technology.
This presentation gives an overview of NIST's effort in developing a framework to better manage the risks to individuals, organizations, and society associated with AI. The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Biography: Elham Tabassi is the Chief of Staff in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST). She leads NIST's Trustworthy and Responsible AI program, which aims to cultivate trust in the design, development, and use of AI technologies by improving measurement science, standards, and related tools in ways that enhance economic security and improve quality of life. She has been working on various machine learning and computer vision research projects with applications in biometrics evaluation and standards since she joined NIST in 1999. She is a member of the National AI Research Resource Task Force, a senior member of the IEEE, and a fellow of the Washington Academy of Sciences.

Brian Stanton

Title: Trust and Perception of an AI System

Abstract: The artificial intelligence (AI) revolution is upon us, with the promise of advances such as driverless cars, smart buildings, automated health diagnostics, and improved security monitoring. Many current efforts aim to measure system trustworthiness through measurements of accuracy, reliability, and explainability, among other system characteristics. While these characteristics are necessary, determining that an AI system is trustworthy because it meets its system requirements won't ensure widespread adoption of AI. It is the user, the human affected by the AI, who ultimately places their trust in the system.

Biography: Brian Stanton (brian.stanton@nist.gov) is a Cognitive Scientist in the Visualization and Usability Group at the National Institute of Standards and Technology, where for the last six years he has been the lead researcher on the Artificial Intelligence User Trust project. He has worked on biometric projects for the Department of Homeland Security and the Federal Bureau of Investigation's Hostage Rescue Team, and with latent fingerprint examiners. He previously worked in private industry designing user interfaces for air traffic control systems and B2B web applications.

David James Marcos

Title: Responsible AI: Building out Practical Governance

Abstract: Practical and scalable governance is critical when developing responsible AI products and solutions. Microsoft is operationalizing responsible AI through a coordinated cross-company effort as the company puts its principles into practice. This talk will provide an overview of Microsoft’s approach and journey, discussing building blocks of our responsible AI program and the practical aspects of building and institutionalizing a culture of responsible AI across the company.

Biography: David Marcos leads the governance and enablement team within Microsoft's Office of Responsible AI, driving cross-company efforts to institutionalize AI governance, awareness, and training. Prior to his current position, Mr. Marcos led the development of Microsoft's Responsible AI compliance capabilities as part of the Ethics & Society team in Microsoft's Cloud & Artificial Intelligence division. Mr. Marcos was also previously Chief Privacy Officer of Microsoft's Cloud & Artificial Intelligence division, driving governance and privacy engineering solutions for GDPR. Before joining Microsoft, Mr. Marcos worked for the National Security Agency in a variety of positions, including technical director of the NSA Office of Civil Liberties and Privacy, deputy technical director of the NSA Office of the Director of Compliance, and privacy research lead in the NSA Research Directorate. He specializes in governance, privacy, and compliance, focusing on legal automation and ethical computation in cloud technologies. Mr. Marcos holds a B.S. in Computer Engineering from Penn State and an M.S. in Strategic Intelligence from the National Intelligence University. He is both a Certified Information Privacy Manager and Technologist (CIPM/CIPT).

Dinesh Verma

Title: Trusting the outcomes of AI models: Experiences from Applications of AI in IoT Solutions

Abstract: Although the application of AI and machine learning holds the promise of significant improvements in creating IoT solutions, a careless application of AI may do more harm than good. AI needs to be applied with a careful understanding of the assumptions underlying the training data, and of the differences between the training environment and the operational environment. In the course of deploying AI-based solutions for tasks such as detecting IoT devices in the network, or using acoustics in various IoT solutions, we came across several challenges in making AI-based solutions work in a reliable and trustworthy manner. From those experiences, we have drawn up a set of best practices for using AI technologies in IoT solutions to develop resilient and trustworthy systems. We believe these best practices generalize to applications of AI more broadly, and the talk will provide an overview of them.
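
As a toy illustration of the abstract's point about training vs. operational environments (a minimal sketch, not the speaker's actual methodology; all data and variable names below are invented), one simple guardrail is to compare a feature's distribution in training data against what the deployed system actually sees:

    # Minimal drift check: compare a feature's distribution between
    # training data and operational data. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)   # stands in for training data
    deploy_feature = rng.normal(loc=0.7, scale=1.0, size=1000)  # stands in for operational data

    # Crude shift score: difference of means in units of pooled standard deviation.
    pooled_std = np.sqrt(0.5 * (train_feature.var() + deploy_feature.var()))
    shift = abs(train_feature.mean() - deploy_feature.mean()) / pooled_std
    print(f"standardized mean shift: {shift:.2f}")  # large values flag a mismatch

In practice a deployment pipeline would run such checks per feature and alert when the shift exceeds a threshold; the point is simply that training-time assumptions are testable.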

Biography: Dinesh C. Verma is a Fellow of the UK Royal Academy of Engineering, an IEEE Fellow, and an IBM Fellow. He is currently the Chief Scientist of the Research Consulting Program, with a focus on the US public sector. He has authored 11 books, 150+ technical papers, and 185+ U.S. patents. He has chaired or vice-chaired the IEEE Technical Committee on Computer Communications and the IEEE Internet Technical Committee, and has served on various program committees and editorial boards. He is a member of the IBM Academy of Technology, an IBM Master Inventor, and the winner of several IBM internal technical awards. He has contributed to several IBM products and service offerings, including significant contributions to the server networking stack, network management products, edge computing, and cellular network analytics, and has led several multi-national, multi-organizational research programs. More details about Dinesh can be found at http://ibm.biz/dineshverma

Santosh Kumar

Title: Challenges and Opportunities in Trustworthy AI for Health and Wellness

Abstract: AI is regarded as the most promising tool to improve the quality of health care while reducing cost. It can be employed in many stages of care, including AI-assisted diagnosis from radiological images, AI-enabled robotic surgery, AI-enabled wearables that remotely detect early signs of disease onset or deterioration, AI-enabled conversational robots that manage medication compliance and administration, AI-powered virtual therapists that assist with post-treatment recovery, and surgical training.

As AI failures in many of these settings can lead to health deterioration and threaten life, several fundamental scientific and engineering challenges need to be resolved before AI-enabled systems can gain and retain the trust of various stakeholders. The incorporation of AI in healthcare decision-making, devices, and procedures also presents legal, regulatory, and ethical issues that are, at their core, about trust and trustworthiness.

Biography: Santosh Kumar is the Lillian & Morrie Moss Professor of Computer Science at the University of Memphis and Director of the NIH-funded mHealth research centers MD2K and mDOT. His research develops wearable AI to enable the development, optimization, and privacy-aware deployment of sensor-triggered health interventions. Open-source software developed by his team has been used to conduct scientific studies nationwide, producing hundreds of terabytes of wearable sensor data. His team has used these data to develop AI models that detect stress, smoking, craving, cocaine use, brushing, and flossing from wearables.

James Zou

Title: Debugging and editing AI models using natural language

Abstract: Continuously understanding how AI makes mistakes and correcting those mistakes are important steps toward building trustworthy systems. I will discuss some recent advances in using natural language to characterize how, where, and why an AI model makes mistakes on specific slices of data. Then we will discuss how to edit models to correct some mistakes by providing them with high-level conceptual feedback.
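
To make the idea of data slices concrete (a minimal sketch, not the speaker's method; the dataset and column names below are invented), error analysis often starts by grouping a model's validation mistakes by metadata and looking for slices with unusually low accuracy:

    # Toy slice-level error analysis: find metadata slices where the
    # model underperforms. Illustrative only.
    import pandas as pd

    # Hypothetical validation results: one row per example, with metadata
    # tags and whether the model's prediction was correct.
    results = pd.DataFrame({
        "lighting": ["day", "day", "night", "night", "night", "day"],
        "blurry":   [False, True,  False,   True,    True,    False],
        "correct":  [True,  True,  False,   False,   True,    True],
    })

    # Per-slice accuracy: slices well below overall accuracy are candidate bugs.
    print("overall:", results["correct"].mean())
    for col in ["lighting", "blurry"]:
        print(results.groupby(col)["correct"].mean())

The talk's premise goes a step further: instead of hand-chosen metadata columns, natural language is used to describe the problematic slices themselves.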

Biography: James Zou is an assistant professor at Stanford University. He works on making machine learning more reliable, human-compatible, and mathematically sound. He also works on the responsible deployment of AI in healthcare and medicine. He has received a Sloan fellowship, a Chan-Zuckerberg fellowship, an NSF CAREER award, a Top Ten Clinical Research Achievement Award, and faculty awards from Google, Amazon, Tencent, and Adobe.

Q. Vera Liao

Title: From trustworthy AI to appropriate trust: lessons from human-centered explainable AI

Abstract: Explainability is often considered one of the pillars of trustworthy AI. The past few years have seen a surge of interest in algorithms, methods, and toolkits to make AI explainable, with one goal, among others, being to engender trust in users. However, empirical studies that examine people's interactions with AI explanations have shown mixed results on their effectiveness and warned that even technically sound explanations can result in harmful over-trust and over-reliance. In this talk, I will discuss lessons from research on human-centered explainable AI and argue that technology creators' responsibility is not limited to making AI trustworthy, but extends to responsibly communicating that trustworthiness to ensure appropriate and equitable user trust. I will also draw on social science and human-computer interaction (HCI) literature on trust in technologies to suggest paths forward for responsibly building trust in AI.

Biography: Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at the IBM T.J. Watson Research Center, and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM CHI and IUI. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the editors team for the ACM CSCW conferences, and on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS).

Jiaqi Ma

Title: The Unique Challenges in Trustworthy Graph Machine Learning

Abstract: TBD

Biography: TBD

John P. Dickerson

Title: On the Responsible Use of Machine Learning in Market Design

Abstract: TBD

Biography: TBD

Susan Aaronson

Title: Our Data Driven Future Needs a Rethink: Data Governance Ain’t Working

Abstract: TBD

Biography: Susan Ariel Aaronson is a CIGI senior fellow. She is an expert in international trade, digital trade, good governance, and human rights. Aaronson is particularly interested in, and writes on, how the digital economy is changing governance and human rights. She is currently writing on comparative advantage in data, comparing how nations govern data, and examining how virtual reality will challenge our existing approach to governance.

Susan is also a research professor of international affairs and cross-disciplinary fellow at George Washington University's Elliott School of International Affairs, where she directs the Digital Trade and Data Governance Hub. The Hub educates policymakers and the public on domestic and international data governance, and maps the governance of personal, public, and proprietary data around the world to illuminate the state of data governance.

Susan is the former Minerva Chair at the National War College. She is the author of six books and more than 50 scholarly articles. Her work has been funded by major international foundations, including the MacArthur, Hewlett, Ford, Koch, and Rockefeller Foundations; governments such as the Netherlands, the United States, and Canada; international organizations such as the United Nations, the International Labour Organization, and the World Bank; and US corporations including Google, Ford Motor, and Levi Strauss. She loves to do triathlons and study ballet, and admits she is mediocre at these activities.

Karen Levy

Title: AI and Data Governance

Abstract: TBD

Biography: Karen Levy is an Associate Professor of Information Science at Cornell University and Associated Faculty at Cornell Law School. She is a sociologist and lawyer whose research focuses on legal, social, and ethical dimensions of data-intensive technologies.

Anne Washington

Title: KDD in the public interest

Abstract: TBD

Biography: Anne L. Washington is a public interest technologist serving as an Assistant Professor of Data Policy at the NYU Steinhardt School. Her expertise in public-sector information currently addresses the emerging governance needs of data science. The National Science Foundation has funded her research multiple times, including a prestigious five-year NSF CAREER grant on open government data. Her data-intensive projects draw on both interpretive research methods and computational text analysis. She holds an undergraduate degree in computer science from Brown University and a doctorate in Information Systems and Technology Management from the George Washington University School of Business.