Postdoctoral Research Associate, Natural Language Processing

The Alan Turing Institute, London, United Kingdom (Research Programmes)

Company Description

The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence. The Institute is named in honour of the scientist Alan Turing and its mission is to make great leaps in data science and artificial intelligence research in order to change the world for the better.

Position

The Public Policy research programme works alongside policymakers to explore how data-driven public service provision and policy innovation might help to solve long-running societal problems. We also work hand-in-hand with public sector organisations and citizens to develop practice-based ethical standards for the responsible development and use of data science and AI. Our dynamic group has co-produced—with the Office for AI and the Government Digital Service—the UK Government’s official public sector guide for designing and implementing ethical and safe AI. We have also co-authored—with the Information Commissioner’s Office—the first guidance ever released by a UK regulator on explaining AI-assisted decisions.

In addition to our work with government and regulators, we carry out interdisciplinary academic research in the area of AI ethics and governance as well as AI and society. Our research projects rely entirely on public funding, and they include a review of the ethics of machine learning (ML) in children’s social care; an exploration of the relationship of notions of AI, human agency, privacy and trust in intercultural and global contexts; an investigation into how to build grassroots data rights charters through deliberative democracy; an examination of the role of responsible data management in criminal justice applications of AI; and an analysis of the interpretability needs of AI systems in the financial services sector.

The Online Safety Team is a new part of The Turing’s Public Policy Programme. It provides objective, evidence-driven insight into the technical, social, empirical and ethical aspects of online safety, supporting the work of public servants (particularly policymakers and regulators), informing civic discourse and extending academic knowledge. High-priority topics include online hate, personal attacks, extremism, misinformation and disinformation. We have three main workstreams:

  1. Data-centric AI for online safety
  2. Building an Online Harms Observatory
  3. Policymaking for online safety

ROLE PURPOSE

The successful candidate will work as a Postdoctoral Researcher in Natural Language Processing (NLP), joining The Turing’s Online Safety Team. They will work in the Data-centric AI for online safety workstream, which aims to create a step-change in the socially responsible use of AI for online safety. They will be responsible for conducting original research into the creation and evaluation of machine learning systems for detecting and categorising harmful online content (e.g. hate, personal attacks, extremism, misinformation). Candidates who are experienced in machine learning and NLP but who have little experience in online safety will be considered, provided they can demonstrate interest in, and willingness to learn about, the topic. Key objectives of the project are:

  • Objective 1. To build high-performing, robust, fair and explainable tools for detecting content and activity that could inflict harm on people online.
  • Objective 2. To understand the limitations, weaknesses, biases and failings of tools and technologies for detecting content and activity that could inflict harm on people online.
  • Objective 3. To produce datasets and tools that can be used by the research community to understand and increase online safety.

The successful candidate will be responsible for conducting original research and implementing innovative data-centric AI, which could include active learning, adversarial learning, few-shot learning, and/or domain adaptation approaches. They will drive forward research by co-designing innovative projects, training and evaluating models, and processing data. There is scope for the successful candidate to support the creation of new labelled datasets and other high-impact community resources. The research activities will lead to the publication of high-impact academic papers at NLP and machine learning venues (e.g. ACL, EMNLP and NAACL). Opportunities for non-academic scientific communication (e.g. blogs and public-facing talks) will be provided, if desirable to the successful candidate. The successful candidate will be enabled to explore new areas of NLP research in online safety and to develop models which can be used for real-time analysis of content in The Turing’s Online Harms Observatory.

The successful candidate will report to Dr. Bertie Vidgen. They will collaborate with other NLP and data science experts and the rest of The Online Safety Team and Public Policy Programme, including Dr. Scott Hale and Prof. Helen Margetts. The successful candidate will be supported with opportunities for ongoing training and development, including funding to attend courses and conferences. There are no teaching requirements, and the successful candidate will be encouraged to publish in top-tier venues, with appropriate guidance and supervision. They will also be enabled to form new collaborations and to support the wider NLP community, such as by chairing conferences and joining workshop committees. This is a research-focused role which offers an excellent opportunity for ambitious early career researchers. Opportunities to apply for further funding will be encouraged and supported.


DUTIES AND AREAS OF RESPONSIBILITY

  • Conduct original research to achieve the objectives of the Data-centric AI for Online Safety workstream.
  • Train, evaluate and improve new machine learning models using innovative NLP methods.
  • Identify and evaluate labelled training datasets.
  • Write and submit academic papers to top-tier conferences.
  • Support and lead the creation of new labelled datasets.
  • Support the deployment of models in live settings, such as the Online Harms Observatory.
  • Support the work of team members in engaging with non-technical stakeholders.

Requirements

  • PhD-level degree in computer science, statistics, data science, or a related discipline.
  • Experience in training and evaluating NLP models.
  • Experience of publishing in top-tier academic venues (e.g. ACL, NAACL, EMNLP).
  • Highly motivated and committed to achieving the project goals.
  • Participates in networks within the organisation or externally to share knowledge and information in order to develop practice or help others learn.
  • Independently makes low-risk decisions that mainly affect themselves or a small number of people, guided by regulation and practice.
  • Creatively solves problems, working both independently and with other team members.
  • Ability to use own judgement to analyse and solve problems.
  • Ability to organise working time, take the initiative, and carry out research independently, under the guidance of the PI.

Please see the job description for a full breakdown of the Duties and Responsibilities and the Person Specification.

Other information

APPLICATION PROCEDURE

If you are interested in this opportunity, please click the apply button below. You will need to register on the applicant portal and complete the application form including your CV and covering letter. If you have questions about the role or would like to apply using a different format, please contact us on 020 3862 3575 or 0203 862 3340, or email [email protected].

If you are applying for more than one role at the Turing, please note that only one Cover Letter can be visible on your profile at one time. If you wish to apply for multiple roles and do not want to overwrite your existing Cover Letter, please apply for the role using the button below and forward your additional cover letter directly to [email protected] quoting the job title.

CLOSING DATE FOR APPLICATIONS: Monday 15 November 2021 at 23:59.

TERMS AND CONDITIONS

This full-time post is offered on a fixed-term basis for two years. This role ideally requires the post-holder to be in place by 03 January 2022. The annual salary range is £37,000–£42,000 (depending on experience) plus excellent benefits, including flexible working and family-friendly policies: https://www.turing.ac.uk/work-turing/why-work-turing/employee-benefits

Candidates who have not yet been officially awarded their PhD will be appointed as Research Assistant at a salary of £34,500 per annum.

EQUALITY, DIVERSITY AND INCLUSION

The Alan Turing Institute is committed to creating an environment where diversity is valued and everyone is treated fairly. In accordance with the Equality Act, we welcome applications from anyone who meets the specific criteria of the post regardless of age, disability, ethnicity, gender reassignment, marital or civil partnership status, pregnancy and maternity, religion or belief, sex and sexual orientation.

We are committed to building a diverse community and would like our leadership team to reflect this. We therefore welcome applications from the broadest spectrum of backgrounds.

Reasonable adjustments to the interview process will be made for any candidates with a disability.

Please note all offers of employment are subject to obtaining and retaining the right to work in the UK and satisfactory pre-employment security screening which includes a DBS Check.

Full details on the pre-employment screening process can be requested from [email protected].