Name: Medha Shekhar
Dissertation Defense Meeting
Date: Monday, July 12, 2021
Time: 8:30 AM
Location: Virtual, https://bluejeans.com/133861521/7201
Advisor: Dobromir Rahnev, Ph.D. (Georgia Tech)
Dissertation Committee Members:
Rick Thomas, Ph.D. (Georgia Tech)
Paul Verhaeghen, Ph.D. (Georgia Tech)
Christopher J. Rozell, Ph.D. (Georgia Tech)
Rani Moran, Ph.D. (University College London)
Title: How do humans give confidence? Comparing popular process models of confidence generation
Abstract: Humans have the metacognitive ability to assess the likelihood that their decisions are correct via estimates of confidence. Several theories have attempted to model the computational mechanisms that generate confidence, yet, because little work has directly compared these models on the same data, no consensus has emerged among them. In this study, we extensively compare twelve popular process models by fitting them to large datasets from two experiments in which participants completed a perceptual task with confidence ratings. Quantitative comparisons identified the best-fitting model as one that postulates a single system for generating both choice and confidence judgments, in which confidence is additionally corrupted by signal-dependent noise. These results contradict dual-processing theories, according to which confidence and choice arise from coupled or independent systems. Model evidence from these data also failed to support the popular notions that confidence is derived from post-decisional evidence, from strictly decision-congruent evidence, or from posterior probability computations. Further, qualitative analyses showed that the best-fitting models were those that closely predicted individual variation in both primary task performance and metacognitive ability, whereas the worst-performing models predicted one or both poorly. Together, these analyses establish a general framework for model evaluation that also provides qualitative insight into the models' successes and failures. Most importantly, by quantifying the evidence for competing theories of confidence, these results begin to reveal the nature of metacognitive computations.
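
To make the winning model's key assumption concrete, here is a minimal, hypothetical simulation sketch in Python. It is not the fitting code from the dissertation, and every parameter name and value is an illustrative assumption: a single signal-detection-style system produces both the choice and the confidence rating from the same evidence sample, with confidence additionally corrupted by noise that scales with the strength of the evidence.

import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative parameters (assumptions, not values from the dissertation)
n_trials = 10_000
signal_mu = 1.0                         # mean internal evidence for each stimulus category
sigma_conf = 0.5                        # scale of the signal-dependent confidence noise
criteria = np.array([0.5, 1.0, 1.5])    # criteria mapping confidence onto a 1-4 rating scale

# Stimulus category (-1 or +1) and the noisy internal evidence it evokes
stimulus = rng.choice([-1, 1], size=n_trials)
evidence = rng.normal(loc=stimulus * signal_mu, scale=1.0)

# A single system: the same evidence sample drives both choice and confidence
choice = np.sign(evidence)

# Confidence variable: |evidence| plus noise whose spread grows with |evidence|
conf_var = np.abs(evidence) + rng.normal(scale=sigma_conf * np.abs(evidence))

# Discretize the continuous confidence variable into ratings 1-4
rating = 1 + np.searchsorted(criteria, conf_var)

print(f"overall accuracy: {(choice == stimulus).mean():.3f}")
for r in range(1, 5):
    mask = rating == r
    if mask.any():
        print(f"rating {r}: p(correct) = {(choice[mask] == stimulus[mask]).mean():.3f}")

In this toy model, higher ratings should track higher accuracy, and raising sigma_conf weakens that relationship while leaving choice accuracy untouched, which is one informal way to see how signal-dependent confidence noise can dissociate metacognitive ability from primary task performance.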