The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology. Therefore, we have developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center, and more information about generative AI features in our List of generative AI features page.

As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explanations to help our clients with their implementation of the AI Alt Text Assistant. We recommend that administrators carefully review this page and ensure that instructors are aware of the considerations and recommendations below before activating the AI Alt Text Assistant’s functionalities for your institution.

How to contact us:

  • For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at [email protected].
  • For questions or feedback about the functionality or output of the AI Alt Text Assistant, please submit a client support ticket via Behind the Blackboard.

AI-facilitated functionalities

The AI Alt Text Assistant provides instructors with suggestions on alternative text for images used within their online course(s) via the Ally Instructor Feedback workflow. It is intended to inspire instructors and make accessibility fixes more efficient.  

Anthology has partnered with AWS to upgrade Ally’s AI model to power even more impactful suggestions, including the ability to generate image description suggestions for more complex content such as STEM material, charts and graphs, text in images, and handwriting. This feature is powered by Claude, a language model from Anthropic, offered through Amazon Bedrock. If your institution is based in Canada, the assistant uses the Claude 3 Sonnet model; in all other regions, it uses the newer Claude 3.5 Sonnet. The model used depends on availability in your AWS region.
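As a rough illustration only (not Anthology’s actual code), the region-dependent model choice described above could be sketched as follows. The model IDs are the public Amazon Bedrock identifiers for these Claude versions; the mapping of Canadian institutions to the `ca-central-1` region is an assumption.

```python
# Sketch of region-dependent model selection, as described above.
# CANADA_REGIONS is an assumed mapping; the model IDs are the
# public Amazon Bedrock identifiers for these Claude versions.
CANADA_REGIONS = {"ca-central-1"}

def select_model_id(aws_region: str) -> str:
    """Return the Bedrock model ID used for alt text generation."""
    if aws_region in CANADA_REGIONS:
        return "anthropic.claude-3-sonnet-20240229-v1:0"   # Claude 3 Sonnet
    return "anthropic.claude-3-5-sonnet-20240620-v1:0"     # Claude 3.5 Sonnet
```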

The AI Alt Text Assistant provides the following generative AI-facilitated functionalities:

  • Generate alternative text for images. Suggests alternative text within the Ally Instructor Feedback workflow when instructors are fixing images that lack alternative text.

These functionalities are subject to the limitations and availability of the Amazon Bedrock service and subject to change. Please check the relevant release notes for details.

Key facts

What Ally functionalities use AI systems?

Alt text suggestion via AI on the Ally Instructor Feedback panel, when an image has been flagged by Ally for not having alternative text.

Is this a third-party supported AI system?

Yes. We use Claude, a model developed by Anthropic and offered through Amazon Bedrock. Institutions in Canada use Anthropic Claude 3 Sonnet, while other regions use Claude 3.5 Sonnet, depending on what is supported in each AWS region.

How does the AI system work?

The AI Alt Text Assistant uses Claude models from Anthropic through Amazon Bedrock to auto-generate alt text suggestions. Depending on your institution’s location, this may be Anthropic Claude 3 Sonnet (in Canada) or Claude 3.5 Sonnet (in other regions). The image is sent via the AWS SDK, and the model generates an alternative text suggestion from it.

We have also added the ability to send contextual prompts based on the surrounding text content when an image is embedded in a rich content item. No further information is used (e.g., course title, course description). Images are not used to train the Amazon Bedrock models, and Bedrock generates the output based solely on each image that is uploaded to the AI Alt Text Assistant.
Where is the AI system hosted?

Anthology currently uses multiple AWS instances for Amazon Bedrock, determined by Amazon region availability. AWS lists region availability for the Amazon Bedrock model currently used by Ally (Claude 3.5 Sonnet) at Model support by AWS Region in Amazon Bedrock. At times, we may use resources in other locations, such as US East, to provide the best availability for Amazon Bedrock when a specific region is unavailable.

All client output generated by the AI Alt Text Assistant is stored in each client’s existing Ally database managed by Anthology.

Is this an opt-in functionality?

Yes. Administrators need to activate the AI Alt Text Assistant in the LMS Ally configuration options: select AI Generation of alternative descriptions for images under the Features tab. Administrators can activate or deactivate this functionality at any time.
How is the AI system trained?

Anthology is not involved in the training of the Anthropic model that powers the AI Alt Text Assistant functionalities through Amazon Bedrock. These models are trained by Anthropic (Anthropic Claude).

Anthology does not further fine-tune these models using our own or our clients’ data.

Is client data used for (re)training the AI system?

No. Amazon commits in its public documentation not to use any input into, or output of, Amazon Bedrock for the (re)training of the large language models within the service. Amazon provides information about how the large language models are trained in the links in the ‘Further information’ section below. Any data input and output is encrypted in transit and at rest.
How does Anthology Ally use personal information with regard to the provision of the AI Alt Text Assistant system?

Anthology only uses the information collected in connection with the AI Alt Text Assistant to provide, maintain, and support the AI Alt Text Assistant, and where we have the contractual permission to do so. You can find more information about Anthology’s approach to data privacy in our Trust Center.
In the case of a third-party supported AI system, how will the third party use personal information?

Only images and/or contextual data in rich content surrounding an image are used to power this feature. No further information is shared with Amazon Bedrock.

Neither Amazon nor any Amazon Bedrock model provider uses any Anthology data or Anthology customer data to improve the Amazon Bedrock models.

You can find more information about data privacy practices for Amazon Bedrock in the Amazon documentation on the AWS AI Security page, the AWS Machine Learning Blog, and the Amazon Bedrock Security and Privacy page.

Was accessibility considered in the design of the AI Alt Text Assistant?

Yes. Our accessibility engineers collaborated with product teams to review designs, communicate important accessibility considerations, and test the new features specifically for accessibility. We will continue to consider accessibility an integral part of our Trustworthy AI approach.

Considerations and recommendations for institutions

Intended use cases

The AI Alt Text Assistant is only intended to support the functionalities listed above. These functionalities are provided to and intended for our clients’ instructors, to support them in fixing image accessibility issues flagged by Ally.

Out of scope use cases

The AI Alt Text Assistant is not intended, nor guaranteed, to make every image accessible with perfect alternative text. It remains the instructor’s responsibility to adjust or approve the alternative text based on the context of the course content.

Trustworthy AI principles in practice

Anthology and Amazon believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Amazon have worked to address the applicable risks to the legal, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own legal and ethical AI reviews of their implementation.

Transparency and Explainability

  • We make it clear in the Ally administration configuration options that this is an AI-facilitated functionality.
  • In the user interface for instructors, the AI Alt Text Assistant functionalities are clearly marked as ‘Generative’ functionalities. Instructors are also requested to review the text output prior to use.
  • The metadata of the output created by the AI Alt Text Assistant functionalities is exposed in each client’s Ally usage report with a field for auto-generated content. It also shows if the output was subsequently edited by the instructor.
  • The model behind the AI Alt Text Assistant can vary depending on where your institution is located. In Canada, the assistant uses the Claude 3 Sonnet model; everywhere else, it uses the newer Claude 3.5 Sonnet. This choice is based on what Amazon Bedrock makes available in each region.
  • We encourage clients to be transparent about the use of AI within the AI Alt Text Assistant and provide their instructors and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
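For administrators who want to audit the usage-report metadata mentioned above, a short script along these lines could summarize it. The column names (`auto_generated`, `edited_by_instructor`) are hypothetical placeholders; check your actual Ally usage report schema before relying on them.

```python
# Sketch only: summarize AI-generated alt text entries in an Ally
# usage report export. Column names are hypothetical placeholders.
import csv
import io

def summarize_ai_alt_text(report_csv: str) -> dict:
    """Count auto-generated alt text entries and how many were
    subsequently edited by an instructor."""
    auto, edited = 0, 0
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row.get("auto_generated") == "true":
            auto += 1
            if row.get("edited_by_instructor") == "true":
                edited += 1
    return {"auto_generated": auto, "edited_by_instructor": edited}
```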

Reliability and accuracy

  • We make it clear in the Ally administration configuration options and terms of use that this is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed by the instructor.
  • In the user interface, instructors are requested to review the text output for accuracy.
  • As detailed in the Amazon Bedrock FAQs, there is a risk of inaccurate output. While the specific nature of the AI Alt Text Assistant and our implementation is intended to minimize inaccuracy, it is our client’s responsibility to review output for accuracy, bias, and other potential issues.
  • Instructors can easily regenerate inaccurate output to receive a new alternative text suggestion.
  • As part of their communication regarding the AI Alt Text Assistant, clients should make their instructors aware of this potential limitation.
  • Clients can report any inaccurate output to us using the channels listed in the introduction.

Fairness

  • Large language models inherently present risks relating to stereotyping, over/under-representation and other forms of harmful bias.
  • Given these risks, we have carefully chosen the AI Alt Text Assistant functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
  • Nonetheless, it cannot be excluded that some of the output may be impacted by harmful bias. As mentioned above under ‘Accuracy’, instructors are requested to review output, which can help to reduce any harmful bias.
  • As part of their communication regarding the AI Alt Text Assistant, clients should make their instructors aware of this potential limitation.
  • Clients can report any potentially harmful bias to us using the contact channels listed in the introduction.

Privacy and Security

  • As described in the ‘Key facts’ section above, only the image (and any surrounding rich-content text) is used for the AI Alt Text Assistant and accessible to Amazon. The section also describes our and Amazon’s commitments regarding the use of any personal information.
  • Amazon describes its data privacy and security practices and commitments in the documentation on Amazon Bedrock Security and Privacy.

Safety

  • Large language models inherently present a risk of outputs that may be inappropriate, offensive or otherwise unsafe. Amazon describes these risks in its Limitations section of Amazon’s Responsible AI page.
  • Given these risks, we have carefully chosen the AI Alt Text Assistant functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.
  • Nonetheless, it cannot be excluded that some of the output may be unsafe. As mentioned above under ‘Accuracy’, instructors are requested to review output, which can further help reduce the risk of unsafe output.
  • As part of their communication regarding the AI Alt Text Assistant, clients should make their instructors aware of this potential limitation.
  • Clients should report any potentially unsafe output to us using the channels listed in the introduction.

Humans in control

  • To minimize the risk related to the use of generative AI for our clients and their users, we intentionally put clients in control of the AI Alt Text Assistant’s functionalities. The AI Alt Text Assistant is therefore an opt-in feature. Administrators can activate or deactivate the AI Alt Text Assistant at any time.
  • Additionally, instructors are in control of the output, meaning they are requested to review the text output and can edit the text output, as needed.
  • The AI Alt Text Assistant does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
  • We encourage clients to carefully review this document including the information links provided herein to ensure they understand the capabilities and limitations of the AI Alt Text Assistant and the underlying Amazon Bedrock before they activate the AI Alt Text Assistant feature.

Value alignment

  • Large language models inherently have risks regarding output that is biased, inappropriate, or otherwise not aligned with Anthology’s values or the values of our clients and learners. Amazon describes these risks in the linked websites in the ‘Further information’ section below.
  • Additionally, large language models (like every technology that serves broad purposes), present the risk that they can generally be misused for use cases that do not align with the values of Anthology, our clients or their end users and those of society more broadly (e.g., for criminal activities, to create harmful or otherwise inappropriate output).
  • Given these risks, we have carefully designed and implemented our AI Alt Text Assistant functionalities in a manner to minimize the risk of misaligned output. For instance, we have focused on functionalities for instructors rather than for learners. We have also intentionally omitted potentially high-stakes functionalities.

Intellectual property

  • Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that arise through their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from use of these models.
  • Ultimately, it is our client’s responsibility to review output generated by the AI Alt Text Assistant for any potential intellectual property right infringement.

Accessibility

  • We designed and developed the AI Alt Text Assistant with accessibility in mind as we do throughout Learn and our other products. Before the release of the AI Alt Text Assistant, we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.
  • The AI Alt Text Assistant feature leverages Amazon Bedrock. AWS makes accessibility conformance reports available via the AWS Accessibility pages.

Accountability

  • Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical AI review of functionalities such as those provided by the AI Alt Text Assistant are key pillars of the program.
  • To deliver the AI Alt Text Assistant, we partnered with Amazon to leverage Amazon Bedrock. AWS is committed to the responsible use of AI.
  • Clients should consider implementing internal policies, procedures, and review of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients’ review of the AI Alt Text Assistant.

Further information