The research community is best poised to shape what is developed and in what capacity it can be used by a variety of actors.

Ryan Khurana is the Founder and Executive Director of the Institute for Advancing Prosperity. He is a Senior Fellow for IREF Europe and a Research Fellow for the Consumer Choice Center. He formerly worked as a technology policy analyst at the Competitive Enterprise Institute in Washington, D.C., and as a Research Associate at the Institute of Economic Affairs in London, United Kingdom. His work has been featured in The Telegraph, National Review, The Washington Examiner, The Federalist, and many others.

On Valentine’s Day 2019, OpenAI, a leading artificial intelligence research non-profit, released the results of its latest work, GPT-2, which promised a significant advance in AI language generation. The model, built on an approach called a “Transformer” that Google had pioneered only a few years earlier, was able to generate coherent, essay-length responses to prompts. One of these responses, in which the model generated a fake news story about Miley Cyrus shoplifting, revealed a disconcerting application given the fraught political climate. Fearing that the technology would be used to spread fake news, OpenAI stated that it would not release the data set or the trained model.

OpenAI’s decision drew mockery from many in the AI research community for flouting research norms, with some claiming that withholding the research was merely a way to generate hype in the media. And hype there was, with prophecies of “AI doom” from the mainstream press, which lambasted the technology for its threat to democracy. Neither the dismissive nor the sensationalist take, however, truly captures the importance of the decision. Given that policymakers move too slowly to adequately regulate new technologies, the responsibility for these kinds of ethical decisions must fall to researchers themselves.

While impressive, GPT-2 is not a radical departure from the normal and expected trends in the subfield of AI called natural language processing (NLP). Since GPT-2’s release, systems from Alibaba and Stanford have already surpassed previous records on GLUE, one of the standard benchmarks for NLP. GPT-2’s innovation arose mainly from the size and diversity of its training data: a data set called WebText, scraped from roughly 45 million web links shared on Reddit. That size and diversity allowed the trained model to perform well across a variety of tasks, such as reading comprehension, translation, and text summarization, whereas most prior systems, at least for English, had been built for specialized tasks.
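To make the capability concrete, here is a minimal sketch of prompt-based text generation with a publicly released GPT-2 checkpoint. It assumes the Hugging Face transformers library and the “gpt2” model identifier, neither of which appears in OpenAI’s announcement; it illustrates the general technique rather than OpenAI’s own code.

    # Minimal sketch: prompt-conditioned text generation with a public GPT-2 checkpoint.
    # Assumes the Hugging Face `transformers` library is installed (pip install transformers);
    # illustrative only, not OpenAI's original release code.
    from transformers import pipeline

    # Load a text-generation pipeline backed by a publicly available GPT-2 model.
    generator = pipeline("text-generation", model="gpt2")

    # Condition the model on a short prompt and sample a continuation.
    prompt = "A shocking report claims that"
    outputs = generator(prompt, max_length=60, num_return_sequences=1)
    print(outputs[0]["generated_text"])

Sampling from the same model with different prompts is all it takes to produce passable short passages, which is why the ease of misuse, rather than any single technical breakthrough, drove OpenAI’s caution.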

Restricting access to the data set and trained model, however, will not prevent a similar advance from being developed independently, precisely because GPT-2 sits on the field’s normal trajectory. Withholding the work only raises the cost of replication somewhat, since it is resource intensive in both time and computation, a barrier that is mild rather than insuperable.

Advances in AI’s language-generation capabilities hold enormous promise. They enable higher-quality applications, such as translation tools, digital assistants, and even news editors. Furthermore, these applications, working in concert with human programmers, can break down barriers in communication and open opportunities for those who are visually impaired.

There is, however, a valid concern over “dual use,” the term for commercial AI technologies’ capacity to be repurposed for malicious ends. Language generation can be used to impersonate others online and to speed the spread of fake news. These and other dual uses give those working in AI cause to be selective about how their work is released and who has access to it. Short-term restrictions force researchers to confront the implications of their work, helping the AI community reduce the risks of malicious applications, accidents, and structural problems.

As it stands, it is not the norm among AI researchers to reject work of exceptional scientific and technological quality on the basis of its likely social impact. There is a strong commitment in the AI community to preventing the development of Lethal Autonomous Weapons Systems (LAWS), but few norms govern pure research that has no immediate consumer or military application. For example, many papers published in prestigious AI journals have few applications outside of surveillance, and researchers focused on beating benchmarks may not always be aware that their work can be used in this way.

While AI ethics is certainly a large field, it often exists in a space distinct from core AI research itself. It is frequently plagued by narrow concerns or poor definitions that prevent it from producing consensus or shaping AI’s development. Corporate pushes to discipline deployment have been criticized as “ethics-washing,” exemplified by Google’s recent decision to terminate its AI ethics board only a week after it was announced. Resolving these tensions requires moving responsibility up to the level of basic research.

A field like artificial intelligence, whose scientific work may eventually be deployed in a wide range of real-world applications, requires researchers willing to grapple with the potential effects of their work. Other fields, such as biotechnology and cybersecurity, have for years had to confront their societal implications, and there is no reason for AI researchers to leave those decisions to others.

Researchers acting in concert with those who understand the social implications of their work is exactly what the field needs to assuage the public’s concerns about AI. Publications could take a cue from social science researchers, who often end their papers with a discussion section that addresses the implications of their findings. As Norbert Wiener, one of the founding fathers of cybernetics, said, “Our understanding of our man-made machines should in general develop pari passu with the performance of the machine.”

The rising complexity of new technologies and the accelerating speed at which they develop mean that policy conversations must also occur outside of media or politics. Journalists are incentivized to sensationalize coverage of artificial intelligence, and policymakers are often too technically illiterate to comprehend it. It is nonetheless vital for those developing norms and policies around AI to keep in mind the social implications of the decisions they make. In the end, the research community is best poised to shape what is developed and in what capacity it can be used by actors ranging from corporations to governments to private individuals. There is a hopeful sign in that need: the very fact that this social obligation is being integrated into AI research hints at the development of a technology that will powerfully serve human needs.