Juho Kim is an Assistant Professor in the School of Computing at KAIST, where he directs KIXLAB (the KAIST Interaction Lab). His research in human-computer interaction focuses on building interactive systems that support interaction at scale: crowdsourcing and human computation, online education and learning at scale, civic engagement and collective action, and interactive data analytics and mining. He creates interactive and collaborative technology that empowers conventionally passive populations, such as students receiving instruction and citizens affected by social issues, to become active, self-directed participants who initiate deeper learning and collective action. He often takes an interdisciplinary approach to his research, connecting computer science with the learning sciences and social sciences, and aims to deploy interactive systems to real users.

He earned his Ph.D. from MIT, his M.S. from Stanford University, and his B.S. from Seoul National University. In 2015-2016, he was a Visiting Assistant Professor and a Brown Fellow at Stanford University. He is a recipient of eight paper awards from ACM CHI, ACM Learning at Scale, and AAAI HCOMP, as well as the Samsung Fellowship.

If you're interested in working with me at KAIST, please email me.

Contact Info
Google Scholar
Blog (in Korean)

N1 Room 605, KAIST
291 Daehak-ro, Yuseong-gu
Daejeon, Korea 34141
Tel: (+82)-042-350-3570
Ph.D. Students
Minsuk Chang
Yoo Jin Lim
Sung-Chul Lee (co-advised by Jihee Kim)
Rest of the team


Want to meet?
Office hours: 4-5pm Tue/Thu

Current Projects

Crowdy: Learnersourced Video Summary

Crowdy is a crowd-powered video learning interface, where learners collaboratively add interactive labels to videos to enhance the content.

RIMES: Multimedia Exercises for Video

RIMES allows teachers to embed interactive multimedia exercises within online lecture videos. Students can record audio, video, and ink-based answers, and teachers can review the responses.

LectureScape: Data-Driven Video Interaction

LectureScape leverages thousands of other learners' interaction history with lecture videos to add a 2D, non-linear timeline, enhanced in-video search, and visual highlights.
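The data-driven idea can be sketched as follows. This is an illustrative reconstruction, not the published system: navigation events from many learners are aggregated into a per-second interaction count, whose peaks could drive timeline highlights; the event format and function names here are my own assumptions.

```python
from collections import Counter

def interaction_profile(events, video_length):
    """Aggregate (learner_id, second_watched) events into a
    per-second count of learner interactions."""
    counts = Counter(sec for _, sec in events if 0 <= sec < video_length)
    return [counts.get(sec, 0) for sec in range(video_length)]

def top_peaks(profile, k=3):
    """Return the k most re-watched seconds, busiest first."""
    return sorted(range(len(profile)), key=lambda s: -profile[s])[:k]
```

For example, if three learners all revisit second 5 of a clip, `top_peaks` surfaces that second first, which a timeline could render as a highlight.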

BudgetWiser: Fact-Based Budget Discussion

BudgetWiser promotes public discussion around government budgets via collaborative fact-checking and tagging, interactive visualization, and extraction of contextual budget information from news articles.

Content-Aware Kinetic Scrolling

Our novel scrolling technique dynamically applies kinetic friction around points of high interest within a web page while scrolling on a touchscreen device.
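The core mechanism can be sketched in a few lines. This is a minimal illustration of the idea rather than the published implementation, and all constants and function names are assumptions: a flick decays faster when the scroll position passes near a point of high interest, so scrolling tends to settle there.

```python
def friction_at(pos, interest_points, base=0.02, boost=0.2, radius=150.0):
    """Friction coefficient at a scroll position: rises from `base`
    toward `base + boost` as the position nears any interest point.
    All numbers are illustrative."""
    nearest = min(abs(pos - p) for p in interest_points)
    if nearest >= radius:
        return base
    return base + boost * (1.0 - nearest / radius)

def simulate_flick(start, velocity, interest_points):
    """Advance a kinetic scroll, applying position-dependent
    friction, until the velocity dies out."""
    pos, v = start, velocity
    while abs(v) > 0.5:
        pos += v
        v *= 1.0 - friction_at(pos, interest_points)
    return pos
```

A flick that passes an interest point travels a shorter total distance than the same flick on a uniform page, which is what makes high-interest regions easier to land on.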

ToolScape: Enhancing How-to Videos

ToolScape captures work-in-progress images and step-by-step information inside video tutorials to help learners acquire how-to skills in any domain. It features workflows for capturing annotations from crowd workers or from learners watching the same video.

Cobi: Communitysourcing Conference Scheduling

Cobi integrates community input, constraint-solving intelligence, and end-user interfaces to help with the scheduling process of large conferences.

TalkScape: Lecture Annotation Tool

How can we capture, interact with, and discuss important moments in a presentation or lecture video? TalkScape supports taking annotated snapshots, visualizing an outline, and deep-linking between the video and external resources.