We have *four* post-doctoral researcher openings in UT Austin’s School of Information:
We will begin reviewing applications and interviewing candidates on May 1st, 2020. (Depending on COVID-19, interviews may be conducted remotely via Zoom.) Questions about these positions should be sent to .
Of the nine areas listed in the call, I drafted three and expect to collaborate closely with the postdocs we hire in those areas. They are listed as BRF5, 6, and 7 in the call and are copied below. If you apply to one of these three areas, in addition to submitting your application to Interfolio, please also email me a copy of your CV and cover letter.
My lab’s research spans AI algorithms and HCI design across information retrieval (IR), crowdsourcing / human computation, and natural language processing (NLP). These postdocs will also have the opportunity to participate in Good Systems, UT Austin’s campus-wide Grand Challenge to design AI technologies that maximally benefit society.
My Research Lab: http://ir.ischool.utexas.edu
UT Austin Good Systems: http://goodsystems.utexas.edu
Our school is a unique mix of interdisciplinary expertise, offering cutting-edge research and education in technical, human, and social aspects of information. Our program is consistently ranked among the top programs nationally.
UT Austin’s iSchool: https://www.ischool.utexas.edu/about/vision
Computing Research @ UT Austin: http://computing.utexas.edu
Life in Austin: https://www.ischool.utexas.edu/about/about-austin
Our university’s motto is “What Starts Here Changes the World,” and these positions will challenge you to do cutting-edge research. We hope you will join us in our mission to drive new research that truly changes the world and develops the technologies of tomorrow.
BRF5. Designing Fair AI and Algorithms
AI systems trained and evaluated on data may not only reproduce data bias but even amplify it. Unfortunately, even defining data bias is difficult, let alone detecting and mitigating it: bias can creep into data in various insidious ways, and determining the best algorithmic criterion of fairness is very challenging and has largely been devoid of human input. To remedy this, we are investigating human-centered bias detection, measurement, evaluation, and feedback to complement existing algorithmic approaches. We are interested in candidates from varying disciplines, including information and computer science, philosophy, design, psychology, sociology, and science and technology studies. The ideal candidate would appreciate and be familiar with diverse views of fairness: algorithmic, societal, philosophical, etc. This Postdoctoral Fellow will be primarily mentored by Amelia Acker, Andrew Dillon, Ken Fleischmann, Min Kyung Lee, and Matthew Lease. The Fellow will also be invited to engage with us in Good Systems, a UT Austin Grand Challenge to design AI technologies that benefit society.
BRF6. Designing Human-Centered AI to Fight Misinformation
Misinformation and disinformation threaten society’s ability to find and recognize reliable information online, with inaccurate information risking harm across governmental, personal, and commercial spheres (e.g., voting, health care decisions, and financial markets). Automated AI models may help with fact-checking, but when many people distrust even well-known news outlets and fact-checking services, how can an AI model explain its outputs to people and earn human trust? How can we design human-centered AI models and interfaces that effectively amplify human abilities? The ideal candidate would bring an appreciation of and familiarity with both HCI and AI. This Postdoctoral Fellow will be primarily mentored by Amelia Acker, Andrew Dillon, Jacek Gwizdka, and Matthew Lease. The Fellow will also be invited to engage with us in Good Systems, a UT Austin Grand Challenge to design AI technologies that benefit society.
BRF7. Datasets and Machine Learning / AI
Datasets drive machine learning, but who is behind the wheel? Data fuels AI progress, yet the design, science, and engineering of these datasets remain largely underdeveloped. We know little today about the work of dataset creation and how alternative dataset design practices impact progress in the AI field. To accelerate AI progress and minimize the risk of harm (e.g., biased datasets yielding biased models), we are qualitatively studying the ecosystem of how today’s AI datasets are proposed, funded, designed, created, and used. The ideal candidate will be comfortable with both qualitative and quantitative methods and have familiarity with machine learning from data. This Postdoctoral Fellow will be primarily mentored by James Howison and Matthew Lease. The Fellow will also be invited to engage with us in Good Systems, a UT Austin Grand Challenge to design AI technologies that benefit society.
UT Austin’s School of Information
On the start date of the position, candidates must hold a doctoral degree. That degree must be in a field relevant to their area of research and will normally have been awarded no more than three years prior to the start date. Exceptions to the three-year limit will be considered on a case-by-case basis, particularly for candidates who have taken positions outside academia and wish to return to academic research. Candidates must articulate clearly in their application materials what independent project they propose to work on during their fellowship, in addition to their time working on iSchool faculty projects.