Principles for Building Ethical Conversational Assistants
When you create a conversational assistant, you are responsible for its impact on the people it talks to. You should therefore consider how users might perceive the assistant’s statements and how a conversation might affect their lives. This is not always straightforward, as you typically know little or nothing about your users’ backgrounds. This guide is meant to help you avoid the worst outcomes.
It is in the best interest of all conversational assistant creators that the public perceives these assistants as helpful and friendly. Beyond this, it is also in the best interest of everyone in society (creators included) that conversational assistants are not used for harassment or manipulation. Aside from being unethical, such uses would leave users with a lasting reluctance to engage with conversational assistants.
The following four key points should help you use this technology wisely. Please note, however, that these guidelines are only a first step, and you should use your own judgement as well.
1. A conversational assistant should not cause users harm.
Even though a conversational assistant exists only in the digital world, it can still harm users simply through what it says and how it says it. For example, assistants are often used as information sources or decision guides. If the information your assistant provides is inaccurate or misleading, users may end up making poor (or even dangerous) decisions based on their interaction with it.
2. A conversational assistant should not encourage or normalize harmful behaviour from users.
Although users have complete freedom in what they communicate to a conversational assistant, these assistants are designed to follow only pre-defined stories. Within those stories, an assistant should never try to provoke the user into harmful behaviour. If the user decides to engage in such behaviour anyway, the assistant should politely refuse to participate; in other words, it should not treat the behaviour as normal or acceptable. Trying to argue with the user, however, very rarely leads to useful results.
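One way to implement this is a simple refusal policy that sits between intent detection and the scripted story. The sketch below is a minimal illustration, assuming a hypothetical assistant whose NLU step labels each user message with an intent string; the intent names and message wording are placeholders, not part of any specific framework.

```python
# Minimal sketch of a refusal policy for harmful user requests.
# Intent names and wording are illustrative assumptions.

HARMFUL_INTENTS = {
    "request_harassment_help",   # e.g. asking the bot to insult someone
    "request_dangerous_advice",  # e.g. asking for instructions to cause harm
}

REFUSAL_MESSAGE = (
    "I'm sorry, but I can't help with that. "
    "Is there something else I can do for you?"
)


def next_response(intent: str, story_response: str) -> str:
    """Return the assistant's reply for the detected intent.

    Harmful intents get a polite, non-judgemental refusal instead of the
    scripted story response; everything else follows the pre-defined story.
    """
    if intent in HARMFUL_INTENTS:
        return REFUSAL_MESSAGE
    return story_response
```

Keeping the refusal short and neutral follows the principle above: the assistant declines without lecturing or arguing, then steers the conversation back to what it can help with.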
3. A conversational assistant should always identify itself as one.
When asked questions such as “Are you a bot?” or “Are you a human?”, an assistant should always inform the user that it is indeed an assistant, not a human. Impostor bots (algorithms that pose as humans) are a major component of platform manipulation, and they breed mistrust. Instead of misleading users, we should build assistants that genuinely support them; as users grow more accustomed to assistants, a larger fraction of work can be delegated to them in the long term. This does not mean that conversational assistants can’t be human-like.
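A straightforward way to guarantee this disclosure is to check each incoming message for identity questions before the normal dialogue logic runs. The sketch below is only an illustration under that assumption; the patterns and the disclosure text are hypothetical and would need to be adapted to your assistant.

```python
import re
from typing import Optional

# Minimal sketch of a self-disclosure check that runs before the normal
# dialogue logic. Patterns and wording are illustrative assumptions.

IDENTITY_PATTERNS = [
    re.compile(r"\bare you (a|an)? ?(bot|robot|machine|ai)\b", re.IGNORECASE),
    re.compile(r"\bare you (a )?human\b", re.IGNORECASE),
    re.compile(r"\bam i talking to a (bot|human|person)\b", re.IGNORECASE),
]

DISCLOSURE = (
    "I'm a virtual assistant, not a human. "
    "I'll do my best to help you, though!"
)


def disclosure_if_asked(message: str) -> Optional[str]:
    """Return a self-disclosure if the user asks whether they are talking to a bot."""
    if any(pattern.search(message) for pattern in IDENTITY_PATTERNS):
        return DISCLOSURE
    return None  # fall through to the normal dialogue logic
```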
4. A conversational assistant should give users a way to verify its identity.
When an assistant communicates with users on behalf of a company, organization, etc., it is important to let users verify that this representation has actually been authorized. Existing technologies can be used for this: for example, if the conversational assistant is embedded in a website served over HTTPS, the site’s certificate (issued by a trusted certificate authority) guarantees that the content, and therefore the assistant itself, really comes from the organization’s domain and has not been tampered with in transit. Another option is to have the conversational assistant use a “verified” social media account.
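To make the HTTPS-based check concrete, the sketch below shows how a client could perform a TLS handshake with the domain that embeds the assistant and inspect the validated certificate, using only the Python standard library. The domain name is a placeholder; in practice this verification is done automatically by the user’s browser when the assistant is embedded in an HTTPS page.

```python
import socket
import ssl

HOST = "assistant.example.com"  # hypothetical domain hosting the assistant


def fetch_verified_certificate(host: str, port: int = 443) -> dict:
    """Perform a TLS handshake and return the validated certificate.

    The default SSL context checks that the certificate chains up to a
    trusted certificate authority and matches the hostname; a failure
    raises ssl.SSLError, meaning the assistant's origin could not be
    verified.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls_sock:
            return tls_sock.getpeercert()


if __name__ == "__main__":
    cert = fetch_verified_certificate(HOST)
    print("Certificate subject:", cert.get("subject"))
    print("Issued by:", cert.get("issuer"))
```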