Behavioral Use Licensing for Responsible AI

by Danish Contractor, et al.

Scientific research and development relies on the sharing of ideas and artifacts. With the growing reliance on artificial intelligence (AI) for many different applications, the sharing of code, data, and models is important to ensure the ability to replicate methods and the democratization of scientific knowledge. Many high-profile journals and conferences expect code to be submitted and released with papers. Furthermore, developers often want to release code and models to encourage development of technology that leverages their frameworks and services. However, AI algorithms are becoming increasingly powerful and generalized. Ultimately, the context in which an algorithm is applied can be far removed from that which its developers intended. A number of organizations have expressed concerns about inappropriate or irresponsible use of AI and have proposed ethical guidelines and responsible AI initiatives. While such guidelines are useful and help shape policy, they are not easily enforceable. Governments have taken note of the risks associated with certain types of AI applications and have passed legislation. While such laws are enforceable, they require prolonged scientific and political deliberation. In this paper we advocate the use of licensing to enable legally enforceable behavioral use conditions on software and data. We argue that licenses serve as a useful tool for enforcement in situations where it is difficult or time-consuming to legislate AI usage. Furthermore, by using such licenses, AI developers provide a signal to the AI community, as well as to governmental bodies, that they are taking responsibility for their technologies and are encouraging responsible use by downstream users.



