AI Conversation — Transparency Note
The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology. Therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center. You can find an overview of Anthology solutions with generative AI in our List of generative AI features.
As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explainability to help our clients with their implementation of AI Conversation. We recommend that administrators carefully review this page and ensure that instructors are aware of the considerations and recommendations below before activating any of AI Conversation’s functionalities for their institution.
How to contact us:
- For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at [email protected]
- For questions or feedback about the functionality or output of AI Conversation, please submit a client support ticket.
Last updated: November 1st, 2024
AI-facilitated functionalities
AI Conversation
AI Conversation is designed as a new interactive activity in which students actively participate. Instructors create an AI Conversation in their courses by outlining a topic and an AI persona and selecting the type of conversation for students to engage in. Within the AI Conversation functionality, instructors can choose between two options: Socratic Questioning, in which the AI persona encourages students to think critically through continuous questioning, and Role Play, which allows students to play out a scenario with the AI persona.
Socratic Questioning
This is a guided questioning activity, or Socratic exercise. The AI persona does not confirm or reject any student response but moves students through a series of questions. At the end of the conversation, students provide a reflection on the activity, highlighting strengths or weaknesses in their learning and noting whether the AI bot showed bias, hallucinations, or inaccuracies. On submission, the instructor receives a transcript of the conversation and the reflection, giving full transparency into the interactions. This is a great way to have a thought-provoking dialogue on course topics without individual one-on-one sessions, which can be difficult to arrange for larger or more complex courses.
AI Conversation provides instructors with the ability to generate an image for the AI Conversation persona using generative AI.
These functionalities are subject to the limitations and availability of the Azure OpenAI Service and are subject to change. Please check the relevant release notes for details.
Role Play
The Role Play feature lets instructors set up simulated conversations for their students by defining specific roles for both the AI persona and students. This interactive option enhances learning and training experiences by allowing students to practice communication skills in realistic scenarios, providing active learning opportunities. With customizable personality traits for the AI persona and contextual prompts, the Role Play feature fosters engaging and dynamic exchanges, enriching the overall learning process and encouraging critical thinking.
Instructors can customize the AI persona by assigning it a name and image. They also define the AI persona's personality traits and select the complexity of its responses. The personality traits assigned to the AI persona in this Role Play option shape its responses and interactions.
Please note: Instructors should select the personality traits carefully and preview the simulated conversation, as the personality traits significantly influence the tone and content of the conversation. For example, if the instructor sets up the AI persona to be warm and empathetic, the AI persona will respond with these traits. If an instructor sets up the AI persona to be controversial or biased, the AI persona’s output will likely be controversial or biased. The AI persona will also not always challenge controversial, biased, or dangerous ideas from students.
As part of our testing of this functionality, Anthology reviewed and discussed these outputs to determine whether the functionality should be limited to avoid any bias or inappropriate output. We concluded that, on balance, institutions and instructors should have the academic freedom to let students engage in simulated conversations that may be controversial or biased. At the same time, we understand that there are limits to the output that the AI persona should be able to produce. Accordingly, instructors are ultimately responsible for the output of AI Conversation and the dialogue students will encounter through the Role Play functionality. In our testing, the existing guardrails implemented by OpenAI, Microsoft, and Anthology prevented any output that was illegal or otherwise did not meet our Trustworthy AI standards. We will continue to monitor this feature and any related client feedback so that we can make any changes necessary for this feature to meet our Trustworthy AI standards.
The AI Conversation functionalities are subject to the limitations and availability of the Azure OpenAI Service and are subject to change. Please check the relevant release notes for details.
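As a rough illustration of how the pieces described above fit together, the sketch below assembles instructor-supplied settings (topic, persona name, personality traits, complexity, and conversation type) into a single system prompt of the kind that could be sent to a chat-completion model. This is a hypothetical sketch for illustration only: the field names, class, and prompt wording are assumptions, not Anthology’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: field names and prompt wording are
# assumptions, not Anthology's actual implementation.

@dataclass
class ConversationConfig:
    topic: str
    persona_name: str
    conversation_type: str                      # "socratic" or "role_play"
    personality_traits: List[str] = field(default_factory=list)
    complexity: str = "intermediate"

def build_system_prompt(cfg: ConversationConfig) -> str:
    """Combine the instructor's settings into one system prompt for the model."""
    if cfg.conversation_type == "socratic":
        style = ("Guide the student through continuous questioning. "
                 "Do not confirm or reject their answers.")
    else:
        style = "Stay in character and play out the scenario with the student."
    traits = ", ".join(cfg.personality_traits) or "neutral"
    return (f"You are {cfg.persona_name}, an AI persona for a course "
            f"activity on: {cfg.topic}.\n"
            f"Personality traits: {traits}.\n"
            f"Response complexity: {cfg.complexity}.\n"
            f"{style}")

# Example instructor configuration (hypothetical values)
cfg = ConversationConfig(
    topic="The causes of the French Revolution",
    persona_name="Professor Lumière",
    conversation_type="socratic",
    personality_traits=["warm", "curious"],
)
prompt = build_system_prompt(cfg)
```

Note how the personality traits flow directly into the system prompt; this is why the document stresses that trait selection shapes the tone and content of everything the persona says.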
Key Facts
| Question | Answer |
|---|---|
| What functionalities use AI systems? | AI Conversation functionalities and AI-generated images for the persona (both as described above). |
| Is this a third-party supported AI system? | Yes – AI Conversation and AI-generated images are powered by Microsoft’s Azure OpenAI Service. |
| How does the AI system work? | AI Conversation leverages Microsoft’s Azure OpenAI Service to auto-generate outputs. It does so by combining the information the instructor provides within the Socratic Questioning or Role Play option (such as topic, AI persona, personality traits, and complexity) with our prompt to facilitate the responses. For a detailed explanation of how the Azure OpenAI Service and the underlying OpenAI GPT large language models work, please refer to the Introduction section of Microsoft’s Transparency Note and the links provided within it. |
| Where is the AI system hosted? | Anthology currently uses multiple global Azure OpenAI Service instances. The primary instance is hosted in the United States, but at times we may use resources in other locations, such as Canada, the United Kingdom, or France, to provide the best availability of the Azure OpenAI Service for our clients. All client course data and instructor input used as input, and all output generated by AI Conversation, are stored in the client’s existing Blackboard database by Anthology. |
| Is this an opt-in functionality? | Yes. Administrators need to activate AI Conversation in the Blackboard admin console. Settings for AI Conversation are in the Building Blocks category: select AI Conversation and Unsplash. Administrators can activate or deactivate each functionality separately. Administrators also need to assign the "Use AI Design Assistant" privilege to course roles as necessary, such as the Instructor role. |
| How is the AI system trained? | Anthology is not involved in training the large language models that power the AI Conversation functionalities. These models are trained by OpenAI and Microsoft as part of the Azure OpenAI Service. Microsoft provides information about how the large language models are trained in the Introduction section of its Transparency Note and the links provided within it. Anthology does not further fine-tune the Azure OpenAI Service using our own or our clients’ data. |
| Is client data used for (re)training the AI system? | No. Microsoft contractually commits in its Azure OpenAI terms with Anthology not to use any input into, or output of, the Azure OpenAI Service for (re)training of the large language models. The same commitment is made in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| How does Anthology use personal information with regard to the provision of the AI system? | Anthology only uses the information collected in connection with AI Conversation to provide, maintain, and support AI Conversation, and only where we have the contractual permission to do so in accordance with applicable law. You can find more information about Anthology’s approach to data privacy in our Trust Center. |
| In the case of a third-party supported AI system, how will the third party use personal information? | Only limited course information is provided to Microsoft for the Azure OpenAI Service. This should generally not include personal information (except where personal information is included in the topic, AI persona, or personality fields, or in the student’s questions and responses to the AI bot). Additionally, any information instructors choose to include in the prompt will be accessible. Microsoft does not use any Anthology data or Anthology client data it has access to (as part of the Azure OpenAI Service) to improve the OpenAI models, to improve its own or third-party products or services, or to automatically improve the Azure OpenAI models for Anthology’s use in Anthology’s resource. The models are stateless. Microsoft reviews prompts and output for its content filtering, and prompts and output are stored for no more than 30 days. You can find more information about the data privacy practices regarding the Azure OpenAI Service in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service. |
| Was accessibility considered in the design of the AI system? | Yes. Our accessibility engineers collaborated with the product teams to review designs, communicate important accessibility considerations, and test the new features specifically for accessibility. We will continue to treat accessibility as an integral part of our Trustworthy AI approach. |
Considerations and recommendations for institutions
Intended use cases
AI Conversation is only intended to support the functionalities listed above. These functionalities are provided to and intended for our clients’ instructors and students, with the aim of enhancing students’ learning through AI-supported activities.
Out-of-scope use cases
Because AI Conversation is powered by the Azure OpenAI Service, which supports a very broad range of use cases, it may be possible to use the prompt functionality in AI Conversation to request output beyond the intended functionalities. We strongly discourage clients from using AI Conversation for any purpose beyond the scope of its intended functionalities. Doing so may result in the generation of outputs that are not suitable for or compatible with the Blackboard environment and the measures we have put in place to minimize inaccurate output.
In particular, the points below should be followed when prompting:
- Only use prompts that are intended to pursue the conversation regarding the assigned topic for AI Conversation. For example, respond to the questions and prompts of the AI bot or ask the AI bot questions regarding the assigned topic.
- Do not use prompts to solicit output beyond the intended functionality. For instance, you should not use the prompt to request sources or references for the output. In our testing, we determined that there are accuracy issues with such output.
- Be mindful that prompts requesting output in the style of a specific person or requesting output that looks similar to copyrighted or trademarked items could result in output that carries the risk of intellectual property right infringement.
- Suggested output for sensitive topics may be limited. Azure OpenAI Service has been trained and implemented in a manner to minimize illegal and harmful content. This includes a content filtering functionality. This could result in limited output or error messages when AI Conversation is used for courses related to sensitive topics (for example, self-harm, violence, hate, or sex). Do not use prompts that violate the terms of your institution’s agreement with Anthology or that violate Microsoft’s Code of Conduct for Azure OpenAI Service and Acceptable Use Policy in the Microsoft Online Services Terms.
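Content filtering of the kind described above typically surfaces to an integrating application as a distinct finish reason on the model response rather than as normal text. The sketch below is a hypothetical illustration, not Anthology’s implementation: it assumes an OpenAI-style chat-completion response shape and shows how an application might distinguish a filtered response (which may appear to the user as limited output or an error message) from a normal completion.

```python
# Hypothetical illustration: assumes an OpenAI-style chat-completion
# response dict with a "finish_reason" field; not Anthology's actual code.

def classify_completion(response: dict) -> str:
    """Return 'filtered', 'truncated', or 'ok' for a chat-completion response."""
    finish = response["choices"][0].get("finish_reason")
    if finish == "content_filter":
        # The service suppressed output flagged by its content filter,
        # e.g. for sensitive topics such as self-harm, violence, or hate.
        return "filtered"
    if finish == "length":
        # The model hit the token limit before finishing its answer.
        return "truncated"
    return "ok"

# Example response shapes (hypothetical)
filtered = {"choices": [{"finish_reason": "content_filter",
                         "message": {"content": ""}}]}
normal = {"choices": [{"finish_reason": "stop",
                       "message": {"content": "Hello"}}]}
```

A design like this lets the application show a clear "this response was filtered" message instead of a blank reply, which is one way the limited output described above can be surfaced to instructors and students.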
For Role Play: Instructors should select the personality traits carefully and preview the simulated conversation, as the personality traits significantly influence the tone and content of the conversation. Please see the function description above for more details.
Trustworthy AI principles in practice
Anthology believes the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Microsoft have worked to address the applicable risks regarding the legal, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own reviews of ethical AI risks.
Transparency and Explainability
- We make it clear in the Blackboard administrator console that AI Conversation is an AI-facilitated functionality.
- In the user interface for instructors, the AI Conversation functionalities are clearly marked as AI functionalities. Instructors are given the ability to preview the conversation and try it out before making it available to students.
- In addition to the information provided in this document on how AI Conversation and the Azure OpenAI Service models work, Microsoft provides additional information about the Azure OpenAI Service in its Transparency Note.
- We encourage clients to be transparent about the use of AI within the AI Conversation and provide their instructors, students, and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
Reliability and accuracy
- We make it clear in the Blackboard administrator console that AI Conversation is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed.
- In the user interface, instructors previewing and students using AI Conversation are informed that responses are generated by AI and therefore may be biased or inaccurate.
- Role Play: Instructors should be aware that the AI persona’s personality traits shape its responses and interactions with students and may impact the reliability and accuracy of the output (including increased risk of hallucinations). Instructors should select personality traits carefully and preview the conversation. See the function description for more details.
- As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output (including hallucinations). While the specific nature of AI Conversation and our implementation is intended to minimize inaccuracy, it is our clients’ responsibility to review the output for accuracy, bias, and other potential issues. If concerns arise, AI Conversation does not need to be used with students; it is an optional course functionality used at the instructor’s discretion.
- As mentioned above, clients should not use the prompt to solicit output beyond the intended use cases, particularly as this could result in inaccurate output (for example, where references or sources are requested).
- As part of their communication regarding AI Conversation, clients should make their instructors and students aware of these potential limitations and risks.
- Instructors can use the additional prompts and settings in the generative workflows to provide more context to AI Conversation to improve alignment and accuracy.
- Clients can report any inaccurate output to us using the channels listed in the introduction to this note.
Fairness
- Large language models inherently present risks relating to stereotyping, over- or under-representation, and other forms of harmful bias. Microsoft describes these risks in the Limitations section of its Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the AI Conversation functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
- Role Play: Instructors should be aware that the AI persona’s personality traits shape its responses and interactions with students and may result in output that includes stereotyping, over- or under-representation, and other forms of harmful bias. Instructors should select personality traits carefully and preview the conversation. See the function description for more details.
- Nonetheless, it cannot be excluded that some of the output may be impacted by harmful bias. As mentioned above under "Reliability and accuracy," instructors are requested to review the activity, which can help to reduce any harmful bias.
- As part of their communication regarding AI Conversation, clients should make their instructors aware of this potential limitation.
- Clients can report any potentially harmful bias to us using the contact channels listed in the introduction to this note.
Privacy and Security
- As described in the "Key facts" section above, only limited personal information is used for AI Conversation and accessible to Microsoft. The section also describes our and Microsoft’s commitment regarding the use of any personal information. Given the nature of AI Conversation, personal information in the generated output is also expected to be limited.
- Our Blackboard SaaS product is ISO 27001/27017/27018 and ISO 27701 certified. These certifications include AI Conversation-related personal information managed by Anthology. You can find more information about Anthology’s approach to data privacy and security in our Trust Center.
- Microsoft describes its data privacy and security practices and commitments in the documentation on Data, privacy, and security for Azure OpenAI Service.
- Regardless of Anthology’s and Microsoft’s commitment regarding data privacy and not using input to retrain the models, clients may want to advise their instructors and students not to include any personal information or other confidential information in the prompts or conversation.
Safety
- Large language models inherently present a risk of outputs that may be inappropriate, offensive, or otherwise unsafe. Microsoft describes these risks in the Limitations section of its Azure OpenAI Service Transparency Note.
- Given these risks, we have carefully chosen the AI Conversation functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.
- Role Play: Instructors should be aware that the AI persona’s personality traits shape its responses and interactions with students and may result in output that is inappropriate, offensive, or otherwise unsafe. Instructors should select personality traits carefully and preview the conversation. See the function description for more details.
- Nonetheless, it cannot be excluded that some of the AI Conversation output may be unsafe. As mentioned above under "Reliability and accuracy," instructors are requested to review output, which can further help reduce the risk of unsafe output.
- As part of their communication regarding AI Conversation, clients should make their instructors and students aware of this potential limitation.
- Clients should report any potentially unsafe output to us using the channels listed in the introduction to this note.
Humans in control
- To minimize the risk related to the use of generative AI for our clients and their users, we intentionally put clients in control of AI Conversation’s functionalities. AI Conversation is therefore an opt-in feature. Administrators must activate AI Conversation and can then activate each functionality separately. They can also deactivate AI Conversation overall or each of the individual functionalities.
- AI Conversation does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
- We encourage clients to carefully review this document including the information links provided herein to ensure they understand the capabilities and limitations of AI Conversation and the underlying Azure OpenAI Service before they activate AI Conversation in the production environment.
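The opt-in control model described above amounts to a set of independent feature toggles, all disabled by default, with a master switch for AI Conversation as a whole. The sketch below is a hypothetical illustration of that pattern only; the class and flag names are assumptions, not Anthology’s actual administration code.

```python
# Hypothetical sketch of the opt-in model described above: independent,
# default-off toggles per functionality. Not Anthology's actual code.

class FeatureFlags:
    def __init__(self):
        # Opt-in: every AI functionality starts disabled.
        self._enabled = {
            "ai_conversation": False,   # master switch (hypothetical name)
            "persona_images": False,    # AI-generated persona images
        }

    def activate(self, name: str) -> None:
        self._enabled[name] = True

    def deactivate(self, name: str) -> None:
        self._enabled[name] = False

    def is_enabled(self, name: str) -> bool:
        # Unknown flags are treated as disabled, keeping the default safe.
        return self._enabled.get(name, False)

flags = FeatureFlags()
flags.activate("ai_conversation")
```

Defaulting every flag to off (and treating unknown flags as off) is what makes the feature genuinely opt-in: nothing AI-facilitated runs until an administrator explicitly enables it.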
Value alignment
- Large language models inherently have risks regarding output that is biased, inappropriate, or otherwise not aligned with Anthology’s values or the values of our clients and learners. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
- Additionally, large language models (like every technology that serves broad purposes), present the risk that they can be misused for use cases that do not align with the values of Anthology, our clients or their end users, and those of society more broadly (for example, for criminal activities or to create harmful or otherwise inappropriate output).
- Given these risks, we have carefully designed and implemented our AI Conversation functionalities in a manner to minimize the risk of misaligned output. We have also intentionally omitted potentially high-stakes functionalities.
- Microsoft also reviews prompts and output as part of its content filtering functionality to prevent abuse and harmful content generation.
Intellectual property
- Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from use of these models.
- Ultimately, it is our client’s responsibility to review output generated by AI Conversation for any potential intellectual property right infringement. Be mindful that prompts requesting output in the style of a specific person or requesting output that looks similar to copyrighted or trademarked items could result in output that carries a heightened risk of infringements.
Accessibility
We designed and developed AI Conversation with accessibility in mind as we do throughout Blackboard and our other products. Before the release of AI Conversation, we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.
Accountability
- Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical AI review of functionalities, such as those provided by AI Conversation, are key pillars of the program.
- To deliver AI Conversation, we partnered with Microsoft to leverage the Azure OpenAI Service which powers AI Conversation. Microsoft has a long-standing commitment to the ethical use of AI.
- Clients should consider implementing internal policies, procedures, and review of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients’ review of AI Conversation.
Further information
- Anthology’s Trustworthy AI approach
- Anthology’s List of generative AI features
- Microsoft’s Responsible AI page
- Microsoft’s Transparency Note for Azure OpenAI Service
- Microsoft’s page on Data, privacy, and security for Azure OpenAI Service