The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology; therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center.

As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explainability to help our clients with their implementation of the AI Design Assistant. We recommend that administrators carefully review this page and ensure that instructors are aware of the considerations and recommendations below before you activate the AI Design Assistant’s functionalities for your institution.

How to contact us:

  • For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at [email protected].
  • For questions or feedback about the functionality or output of the AI Design Assistant, please submit a client support ticket.

Topics on this page include:

  • AI-facilitated functionalities
  • Key facts
  • Considerations and recommendations for institutions
  • Further information

AI-facilitated functionalities

The AI Design Assistant aids instructors with the creation and design of new courses. It is intended to inspire instructors and make course creation more efficient. Anthology has partnered with Microsoft to provide this functionality, not least because of Microsoft's long-standing commitment to the ethical use of AI.

The AI Design Assistant provides the following generative AI-facilitated functionalities in Learn Ultra:

  • Generate keywords for the royalty-free image service in Learn powered by Unsplash – Suggests keywords to the Unsplash search for efficiency.
  • Generate learning modules – Assists instructors by suggesting a course structure.
  • Generate learning module images – Creates and suggests images for each learning module.
  • Generate authentic assignments – Provides suggestions for assignments using your course context.
  • Generate test questions and question banks – Inspires instructors by suggesting a range of questions in a test or building a question bank.
  • Generate discussions and journals – Provides instructors with prompts to encourage class interaction.
  • Context picker – Uses the course context you choose to generate content for many of our AI features.
  • Language selector – Selects the output language from among any of the languages supported by Learn.
  • Generate a rubric – Suggests a grading rubric with structure and criteria for a given assessment, which creates instructor efficiency and provides grading transparency to students.
  • Generate an AI conversation – Creates a conversation between a student and an AI persona in a Socratic questioning exercise or role-play scenario.
  • Generate Document images – Generates images to use within a Document, making Documents more visually appealing to students.

There are ten levels of complexity for AI-generated content.

  1. Early primary school 
  2. Late primary school 
  3. Early middle school 
  4. Late middle school 
  5. Early high school 
  6. Late high school
  7. Undergraduate lower division
  8. Undergraduate upper division 
  9. Graduate level 
  10. Advanced PhD level 

Visit our AI Design Assistant page for instructors to learn more about all its features.

These functionalities are subject to the limitations and availability of the Azure OpenAI Service and subject to change. Please check the relevant release notes for details.


Key facts

Questions and answers about the AI Design Assistant

What functionalities use AI systems?

All AI Design Assistant functionalities described above (keyword generation for Unsplash, generation of learning modules, test questions and question banks, grading rubrics, images for Learn Documents).

Is this a third-party supported AI system?

Yes – the AI Design Assistant is powered by Microsoft’s Azure OpenAI Service.

How does the AI system work?

The AI Design Assistant leverages Microsoft’s Azure OpenAI Service to auto-generate outputs. This is achieved by using limited course information (e.g., course title, course description) and prompting the Azure OpenAI Service accordingly via the Azure OpenAI Service API. Instructors can include additional prompt context for more tailored output generation. The Azure OpenAI Service generates the output based on the prompt and the content is surfaced in the Learn user interface.
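
As a concrete illustration of this flow, the sketch below uses the public `openai` Python SDK to prompt an Azure OpenAI deployment with limited course information and optional instructor context. This is a minimal sketch of the general pattern described above, not Anthology’s actual implementation; the endpoint, deployment name, and prompt wording are illustrative assumptions.

```python
# Minimal sketch of the prompting pattern described above.
# NOT Anthology's implementation: endpoint, key, deployment name,
# and prompt wording are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key="<api-key>",                                         # placeholder
    api_version="2024-02-01",
)

course_title = "Introduction to Marine Biology"            # limited course information
course_description = "A survey of ocean ecosystems."       # e.g., course description
instructor_context = "Emphasize coral reef conservation."  # optional extra prompt context

response = client.chat.completions.create(
    model="example-deployment",  # placeholder Azure OpenAI deployment name
    messages=[
        {"role": "system",
         "content": "Suggest a learning module structure for the course below."},
        {"role": "user",
         "content": f"Title: {course_title}\n"
                    f"Description: {course_description}\n"
                    f"Additional context: {instructor_context}"},
    ],
)

# The generated text would then be surfaced in the Learn user interface.
print(response.choices[0].message.content)
```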

For an explanation on how the Azure OpenAI Service and the underlying OpenAI GPT large language models work in detail, please refer to the Introduction section of Microsoft’s Transparency Note and the links provided within it.

Where is the AI system hosted?

Anthology currently uses multiple global Azure OpenAI Service instances. The primary instance is hosted in the United States, but at times we may use resources in other locations, such as Canada, the United Kingdom, or France, to provide the best availability for the Azure OpenAI Service for our clients.

All client course data used for the input and all output generated by the AI Design Assistant are stored in the client’s existing Learn database by Anthology.

Is this an opt-in functionality?

Yes. Administrators need to activate the AI Design Assistant in the Learn admin console. Settings for the AI Design Assistant are in the Building Blocks category: select AI Design Assistant and Unsplash. Administrators can activate or deactivate each functionality separately. Administrators also need to assign AI Design Assistant privileges to course roles as necessary, such as the Instructor role. The privileges that need to be assigned are ‘Search for images using Unsplash’ and ‘Use AI Design Assistant’.

How is the AI system trained?

Anthology is not involved in the training of the large language models that power the AI Design Assistant functionalities. These models are trained by Microsoft as part of the Azure OpenAI Service. Microsoft provides information about how the large language models are trained in the Introduction section of its Transparency Note and the links provided within it.

Anthology does not further fine-tune the Azure OpenAI Service using our own or our clients’ data.

Is client data used for (re)training the AI system?

No. Microsoft contractually commits in its Azure OpenAI Service terms with Anthology not to use any input into, or output of, the Azure OpenAI Service for the (re)training of the large language models. The same commitment is made in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service.

How does Anthology use personal information with regard to the provision of the AI system?

Anthology only uses the information collected in connection with the AI Design Assistant to provide, maintain, and support the AI Design Assistant, and where we have the contractual permission to do so in accordance with applicable law. You can find more information about Anthology’s approach to data privacy in our Trust Center.

In the case of a third-party supported AI system, how will the third party use personal information?

Only limited course information is provided to Microsoft for the Azure OpenAI Service. This should generally not include personal information (except where personal information is included in course titles, descriptions, and similar course information). Additionally, any information instructors choose to include in the prompt will be accessible to Microsoft.

Microsoft does not use any Anthology data or Anthology client data it has access to (as part of the Azure OpenAI Service) to improve the OpenAI models, to improve its own or third-party products or services, or to automatically improve the Azure OpenAI models for Anthology’s use in Anthology’s resources (the models are stateless). Microsoft reviews prompts and output as part of its content filtering to prevent abuse and harmful content generation. Prompts and output are only stored for up to 30 days.

You can find more information about the data privacy practices regarding the Azure OpenAI Service in the Microsoft documentation on Data, privacy, and security for Azure OpenAI Service.

Was accessibility considered in the design of the AI system?

Yes. Our accessibility engineers collaborated with product teams to review designs, communicate important accessibility considerations, and test the new features specifically for accessibility. We will continue to treat accessibility as an integral part of our Trustworthy AI approach.



Considerations and recommendations for institutions

Intended use cases

The AI Design Assistant is only intended to support the functionalities listed above (keyword generation for Unsplash, generation of learning modules, test questions and question banks, grading rubrics, images for Learn documents). These functionalities are provided to and intended for our clients’ instructors to support them with the creation and design of courses within Learn.

Out of scope use cases

Because the Learn AI Design Assistant is powered by Microsoft’s Azure OpenAI Service, which has a very broad range of use cases, it may be possible to use the prompt functionality in the AI Design Assistant to request output beyond the intended functionalities. We strongly discourage clients from using the AI Design Assistant for any purpose beyond the scope of its intended functionalities. Doing so may result in the generation of outputs that are not suitable for or compatible with the Learn environment and the measures we have put in place to minimize inaccurate output.

In particular, the points below should be followed when prompting:

  • Only use prompts that are intended to solicit more relevant output from the AI Design Assistant (e.g., provide more details on the intended course structure).
  • Do not use prompts to solicit output beyond the intended functionality. For instance, you should not use the prompt to request sources or references for the output. In our testing, we determined that there are accuracy issues with such output.
  • Be mindful that prompts requesting output in the style of a specific person or requesting output that looks similar to copyrighted or trademarked items could result in output that carries the risk of intellectual property right infringement.
  • Suggested output for sensitive topics may be limited. The Azure OpenAI Service has been trained and implemented in a manner that minimizes illegal and harmful content, including through a content filtering functionality. This can result in limited output or error messages when the AI Design Assistant is used for courses related to sensitive topics (e.g., self-harm, violence, hate, sex); a sketch of how this filtering typically surfaces to a calling application follows this list.
  • Do not use prompts that violate the terms of your institution’s agreement with Anthology or that violate Microsoft’s Code of Conduct for Azure OpenAI Service and Acceptable Use Policy in the Microsoft Online Services Terms.
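
For illustration, the sketch below shows how the Azure OpenAI Service’s documented content filtering typically surfaces to a calling application: prompts flagged by the input filter are rejected with an HTTP 400 error whose code is content_filter, while filtering of generated output is reported through the choice’s finish_reason. This is a generic sketch against the public `openai` Python SDK, not Anthology’s code; the endpoint and deployment name are placeholders.

```python
# Generic sketch of detecting Azure OpenAI content filtering with the public
# `openai` Python SDK. Placeholders throughout; not Anthology's implementation.
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key="<api-key>",                                         # placeholder
    api_version="2024-02-01",
)

try:
    response = client.chat.completions.create(
        model="example-deployment",  # placeholder deployment name
        messages=[{"role": "user",
                   "content": "Draft discussion prompts for a history course."}],
    )
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The output-side filter stopped generation.
        print("Output was blocked by the content filter; consider rephrasing.")
    else:
        print(choice.message.content)
except openai.BadRequestError as err:
    # Prompts flagged by the input-side filter are rejected with HTTP 400
    # and an error code of "content_filter".
    print(f"Prompt rejected by the content filter: {err}")
```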

Trustworthy AI principles in practice

Anthology and Microsoft believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Microsoft have worked to address the applicable risks to the legal, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own legal and ethical AI reviews of their implementation.

Transparency and Explainability

  • We make it clear in the Learn administrator console that this is an AI-facilitated functionality.
  • In the user interface for instructors, the AI Design Assistant functionalities are clearly marked as ‘Generate’ functionalities. Instructors are also requested to review the text output prior to use.
  • The metadata of the output created by the AI Design Assistant functionalities has a field for auto-generated content and for whether the content was subsequently edited by the instructor (a hypothetical sketch of such a record follows this list).
  • In addition to the information provided in this document on how the AI Design Assistant and the Azure OpenAI Service models work, Microsoft provides additional information about the Azure OpenAI Service in its Transparency Note.
  • We encourage clients to be transparent about the use of AI within the AI Design Assistant and provide their instructors and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
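
To make the metadata point above concrete, here is a purely hypothetical sketch of the kind of record such fields imply. The field names are invented for illustration and do not reflect Learn’s actual schema.

```python
# Hypothetical illustration only: field names are invented and do not
# reflect Learn's actual metadata schema.
from dataclasses import dataclass

@dataclass
class ContentMetadata:
    ai_generated: bool          # True if produced by the AI Design Assistant
    edited_by_instructor: bool  # True if the instructor edited it before use

# Example: an AI-generated item the instructor subsequently edited.
meta = ContentMetadata(ai_generated=True, edited_by_instructor=True)
print(meta)
```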

Reliability and accuracy

  • We make it clear in the Learn administrator console that this is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed.
  • In the user interface, instructors are requested to review the text output for accuracy.
  • As detailed in the Limitations section of the Azure OpenAI Service Transparency Note, there is a risk of inaccurate output (including ‘hallucinations’). While the specific nature of the AI Design Assistant and our implementation is intended to minimize inaccuracy, it is our client’s responsibility to review output for accuracy, bias and other potential issues.
  • As mentioned above, clients should not use the prompt to solicit output beyond the intended use cases, particularly as this could result in inaccurate output (e.g., where references or sources are requested).
  • As part of their communication regarding the AI Design Assistant, clients should make their instructors aware of this potential limitation.
  • Instructors can use the additional prompts and settings in the generative workflows to provide more context to the AI Design Assistant to improve alignment and accuracy.
  • Instructors can use existing workflows in Learn to manually edit the AI Design Assistant outputs before publishing the output to students.
  • Clients can report any inaccurate output to us using the channels listed in the introduction.

Fairness

  • Large language models inherently present risks relating to stereotyping, over/under-representation and other forms of harmful bias. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
  • Given these risks, we have carefully chosen the AI Design Assistant functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
  • Nonetheless, it cannot be excluded that some of the output may be impacted by harmful bias. As mentioned above under ‘Accuracy’, instructors are requested to review output, which can help to reduce any harmful bias.
  • As part of their communication regarding the AI Design Assistant, clients should make their instructors aware of this potential limitation.
  • Clients can report any potentially harmful bias to us using the contact channels listed in the introduction.

Privacy and Security

  • As described in the ‘Key facts’ section above, only limited personal information is used for the AI Design Assistant and accessible to Microsoft. The section also describes our and Microsoft’s commitment regarding the use of any personal information. Given the nature of the AI Design Assistant, personal information in the generated output is also expected to be limited.
  • Our Learn SaaS product is ISO 27001/27017/27018 certified, and we are currently working towards certification against ISO 27701. These certifications will include the AI Design Assistant-related personal information managed by Anthology. You can find more information about Anthology’s approach to data privacy and security in our Trust Center.
  • Microsoft describes its data privacy and security practices and commitments in the documentation on Data, privacy, and security for Azure OpenAI Service.
  • Regardless of Anthology’s and Microsoft’s commitment regarding data privacy and not using input to (re)train the models, clients may want to advise their instructors not to include any personal information or other confidential information in the prompts.

Safety

  • Large language models inherently present a risk of outputs that may be inappropriate, offensive or otherwise unsafe. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
  • Given these risks, we have carefully chosen the AI Design Assistant functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.
  • Nonetheless, it cannot be excluded that some of the output may be unsafe. As mentioned above under ‘Accuracy’, instructors are requested to review output, which can further help reduce the risk of unsafe output.
  • As part of their communication regarding the AI Design Assistant, clients should make their instructors aware of this potential limitation.
  • Clients should report any potentially unsafe output to us using the channels listed in the introduction.

Humans in control

  • To minimize the risk related to the use of generative AI for our clients and their users, we intentionally put clients in control of the AI Design Assistant’s functionalities. The AI Design Assistant is therefore an opt-in feature. Administrators must activate the AI Design Assistant and can then activate each functionality separately. They can also deactivate the AI Design Assistant overall or each of the individual functionalities.
  • Additionally, instructors are in control of the output. They are requested to review text output and can edit the text output.
  • The AI Design Assistant does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
  • We encourage clients to carefully review this document including the information links provided herein to ensure they understand the capabilities and limitations of the AI Design Assistant and the underlying Azure OpenAI Service before they activate the AI Design Assistant in the production environment.

Value alignment

  • Large language models inherently have risks regarding output that is biased, inappropriate or otherwise not aligned with Anthology’s values or the values or our clients and learners. Microsoft describes these risks in its Limitations section of the Azure OpenAI Service Transparency Note.
  • Additionally, large language models (like every technology that serves broad purposes), present the risk that they can generally be misused for use cases that do not align with the values of Anthology, our clients or their end users and those of society more broadly (e.g., for criminal activities, to create harmful or otherwise inappropriate output).
  • Given these risks, we have carefully designed and implemented our AI Design Assistant functionalities in a manner to minimize the risk of misaligned output. For instance, we have focused on functionalities for instructors rather than for learners. We have also intentionally omitted potentially high-stakes functionalities.
  • Microsoft also reviews prompts and output as part of its content filtering functionality to prevent abuse and harmful content generation.

Intellectual property

  • Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from use of these models.
  • Ultimately, it is our client’s responsibility to review output generated by the AI Design Assistant for any potential intellectual property right infringement. Be mindful that prompts requesting output in the style of a specific person, or requesting output that looks similar to copyrighted or trademarked items, could result in output that carries a heightened risk of infringement.

Accessibility

We designed and developed the AI Design Assistant with accessibility in mind, as we do throughout Learn and our other products. Before the release of the AI Design Assistant, we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.

Accountability

  • Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical review of AI functionalities such as those provided by the AI Design Assistant are key pillars of the program.
  • To deliver the AI Design Assistant, we partnered with Microsoft to leverage the Azure OpenAI Service, which powers the AI Design Assistant. Microsoft has a long-standing commitment to the ethical use of AI.
  • Clients should consider implementing internal policies, procedures and review of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients’ review of the AI Design Assistant.

Further information


Supported output languages for AI workflows

The AI Design Assistant can produce outputs in many languages:

  • Arabic 
  • Azerbaijani 
  • Catalan
  • Chinese, Simplified 
  • Chinese, Traditional 
  • Croatian
  • Czech
  • Danish
  • Dutch
  • English, American
  • English, Australian 
  • English, British 
  • French 
  • French, Canadian 
  • German 
  • Greek 
  • Hebrew 
  • Italian 
  • Irish
  • Japanese 
  • Korean 
  • Malay 
  • Norwegian, Bokmål 
  • Norwegian, Nynorsk 
  • Polish 
  • Portuguese, Brazilian 
  • Portuguese, European
  • Russian 
  • Slovenian
  • Spanish
  • Swedish
  • Thai 
  • Turkish 
  • Ukrainian 
  • Welsh