I love speaking across disciplines and to non-academic communities. Science myths, especially around topics as hot as robotics and AI, ought to be dispelled as approachably and accessibly as possible, while maintaining the seriousness of the issues. I bring my research and technical expertise to life at comedy festivals, nerd gatherings, theatrical productions, and more.

Glasgow Skeptics Society Speaker

Glasgow, UK

Oct 2021

Nudge nudge, wink wink: How AI Uses Social Signals to Learn Who You Are

As technology and AI become prevalent in everyday life, we must critically examine what we want computers to know about who we are and how we think. This talk is an introduction to AI's ability to read physical and digital social signals, from head nods to mouse hovers. We'll discuss how this information is presently used to manipulate individuals and groups. We'll also dive into data transparency, sharing policies, and attempts to regulate the datasphere.

Most-attended talk of the Glasgow Skeptics Society in 2021.

Glasgow Science Festival Speaker

Glasgow, UK

Sept 2021

Organized a film series on the portrayal of AI in the media, and spoke at events discussing AI's role in the environmental crisis (in the lead-up to COP26 in Glasgow).

Virtual Social Interaction Conference Speaker 

Glasgow, UK

July 2021* 

Abstract: Co-speech gestures of the hands transform mental constructs into physical forms and actions, and are widely shown to play a powerful role in face-to-face interaction. Our interest lies specifically in metaphoric gestures, which embody abstract concepts and reflect the relationship between thought, speech, and motion. These gestures not only increase speaker interpretability and viewer comprehension in both humans and virtual agents, but, importantly, can alter how the viewer qualitatively understands information presented by the speaker. Gesture behavior and interpretations of some gestures differ across cultures. For instance, emblematic gestures, such as the “thumbs-up,” are entirely culturally dependent. Similarly, the frequency and amplitude of gesture performances vary widely across cultures. However, metaphoric gestures ground abstract concepts in physical motion. By the Embodied Cognition hypothesis, interpretations of these gestures may therefore be consistent across cultures, as all individuals experience the same physical world. Similar interpretations of the same metaphoric gestures would imply that the same conceptual metaphors are driving gestures in individuals from different cultures. By decomposing metaphoric gestures into physical components, we hope to use cross-cultural interpretations of combinations of these components as a tool to study the potential for universality of physical embodiment of abstract concepts. Our interest is in modeling both the underlying processes that go from mental construct to gesture and the perception of that behavior, specifically how those gestures then influence observers. We can in turn use the assessment of the latter perceptual model’s effectiveness in an interaction to determine a virtual human’s generation model, leading to more effective virtual human performances. The presented study focuses on a key step along this path: the perception of metaphoric gestures. In this crowdsourced study, we present metaphoric gestures with origins in American and Japanese speakers to viewers of each culture and ask viewers to self-report interpretations of abstract notions seen in these gestures. These notions were gathered using thematic analysis of free-response interpretations of these gestures. Results from human studies indicate that interpretations of abstract notions that may be represented by these gestures, such as conflict and togetherness, differ significantly across cultures. This indicates that embodied signifiers of these concepts differ both between individuals and between cultures, discouraging the idea of a universal mapping from physical motion to abstract concept.

*Note: as this conference did not publish official proceedings, this is not a formal publication.

FameLab Scotland Finalist

Edinburgh, UK

May 2020

FameLab Scotland first runner-up, discussing what “smart” technology really means and why the label is perhaps a bit of a misnomer. View the presentation here.

Three Minute Thesis Finalist

Edinburgh, UK

March 2020

Three Minute Thesis runner-up among postgraduate researchers at the University of Glasgow. View the presentation here.

Crowdsourcing: The Good, The Bad, and The Ugly

University of Glasgow Psychology Department, UK

Oct 2019

Introduction to crowdsourcing for scientific data, delivered as a workshop for the Methods and Metascience group (professors and graduate students in UofG Psychology). Slides.

Pint of Science Speaker

Glasgow, UK

May 2019

17 Things You Never Knew About Social Robots. Number 10 Will SHOCK You! A general overview of the state of "Social Robotics," given at a pub in Glasgow. Received the award for Audience Engagement (Comedy). Slides.

Interactions: The Future of AI (Panelist)

Royal Scottish Theatre Company, UK

March 2019

Panelist and speaker following a production of a play about the role of emotional AI in the future.

GitHub and You: A Guide to Commitment Issues

University of Glasgow Psychology Department, UK

March 2019

An introduction to using source control for psychology graduate students. Slides.

WECode Conference Speaker

Cambridge, USA

March 2018

Abstraction and API Design

Women in Technology Conference Speaker

Phoenix, AZ, USA

Sept 2017

There’s earnest self-improvement and self-criticism, and then there’s crippling self-doubt. Learn techniques to tell the subtle difference between the two, and identify self-sabotaging behaviors you can mitigate to reach your highest potential.

SXSW Workshop Leader

Austin, TX, USA

March 2017

What is a social robot? What applications are social robots uniquely suited for, and how can we design interactions that are useful and socially fulfilling? Hands-on demo of multiple social robot APIs, plus a micro-hackathon.

Lesbians Who Tech Flash Talk

San Francisco, CA, USA

Oct 2016

Emotions govern so much of human behavior – why are voice agents taking so long to catch up? What does the future of an emotional voice interface look like, and how will the ability to perceive and express emotions influence the development of voice interfaces in the future?

Boston ML Meetup Speaker

Boston, MA, USA

Oct 2015 - Feb 2016

It’s 2016, and humans don’t give instructions through terminals anymore. The future of interfaces is social. Natural language understanding, conversational dialogs, and subtle social cues are the new instruction set – what challenges exist in the vast field of HCI, and how can current machine learning techniques address them?

MLHacks Workshop Coordinator and Judge

Austin, TX, USA

Jan 2016

Slides