Illuminate Data Q&A - AI Transparency Note
The lawful, ethical, and responsible use of artificial intelligence (AI) is a key priority for Anthology. Therefore, we've developed and implemented a Trustworthy AI program. You can find information on our program and general approach to Trustworthy AI in our Trust Center and List of Generative AI Features.
As part of our Trustworthy AI principles, we commit to transparency, explainability, and accountability. This page is intended to provide the necessary transparency and explainability to help our clients with their implementation of the Illuminate Data Q&A. We recommend that administrators carefully review this page and ensure that the relevant users are aware of the considerations and recommendations below before activating the Illuminate Data Q&A functionalities for their institution.
How to contact us:
- For questions or feedback on our general approach to Trustworthy AI or how we can make this page more helpful for our clients, please email us at [email protected].
- For questions or feedback about the functionality or output of the Illuminate Data Q&A, please submit a client support ticket via Behind the Blackboard.
AI-facilitated functionalities
The Illuminate Data Q&A feature provides an AI-generated text summary of the data presented in the Data Q&A dashboard, which is itself generated from the user's prompt. Anthology has partnered with AWS to provide this functionality, in part because AWS is committed to the responsible use of AI.
The Illuminate Data Q&A feature provides the following generative AI-facilitated functionalities:
Generate data summary and suggestions: An AI-generated text summary of the data displayed on the dashboard is automatically created based on the user's prompt. The feature also suggests alternative prompts for refining the data request.
These functionalities are subject to the limitations and availability of Amazon Q / Amazon Bedrock and are subject to change. Check the relevant release notes for details.
Key Facts
Question | Answer |
---|---|
What Illuminate functionalities use AI systems? | The Data Q&A feature listed above. |
Is this a third-party-supported AI system? | Yes. The Illuminate Data Q&A feature is powered by Amazon Q in QuickSight, which in turn leverages Amazon Bedrock. Amazon Q in QuickSight uses Anthropic Claude and Amazon Titan through Amazon Bedrock (as well as a blend of AWS-developed machine learning models). |
How does the AI system work? | The Illuminate Data Q&A has always relied on Amazon QuickSight (a BI tool with natural language processing capabilities). For the new generative AI capabilities, Illuminate additionally relies on Amazon Q in QuickSight. With the help of Amazon Q in QuickSight, the Illuminate Data Q&A automatically creates a summary of the data displayed on the Data Q&A dashboard. The feature also suggests alternative prompts for refining the data request. Amazon Q leverages Anthropic Claude and Amazon Titan within Amazon Bedrock (plus additional AWS machine learning models) to provide these capabilities. To create summaries and suggest prompts, Amazon Q in QuickSight passes user prompts and summary data from the dashboard visuals via an API to the relevant Amazon Bedrock model. |
Where is the AI system hosted? | The hosting of Amazon QuickSight and Amazon Bedrock is determined by Amazon. Amazon QuickSight is generally available in the same regions as Illuminate. Amazon Bedrock is currently available in the EU and U.S., so data for our EU and U.S. clients is processed in those regions, respectively. The output of the generative AI model consists of temporary summaries and alternative prompts, which are not permanently stored in Illuminate, Amazon Q in QuickSight, or Amazon Bedrock. |
Is this an opt-in functionality? | Yes. Administrators need to opt in on behalf of their institution on the Settings page of Illuminate. Visit the "Anthology Illuminate Settings" topic for details. |
How is the AI system trained? | Anthology is not involved in the training of the Anthropic Claude and Amazon Titan models that power the Illuminate Data Q&A functionalities through Amazon Bedrock. These models are trained by Anthropic (Anthropic Claude) and Amazon (Amazon Titan), respectively. Anthology does not further refine these models using our own or our clients’ data. |
Is client data used for (re)training the AI system? | No. Amazon commits in its public documentation not to use any input into, or output of, the large language models within Amazon Bedrock for retraining those models. Amazon provides information about how the large language models are trained in the links in the "Further information" section below. Any data input and output is encrypted in transit and at rest. |
How does Anthology use personal information with regard to the provision of the Illuminate Data Q&A? | Anthology only uses the information collected in connection with Illuminate Data Q&A to provide, maintain, and support the Illuminate Data Q&A and where we have the contractual permission to do so. You can find more information about Anthology’s approach to data privacy in our Trust Center. |
In the case of a third-party supported AI system, how will the third party use personal information? | Only limited user prompts and summary data from the dashboard visuals are passed from Amazon Q in QuickSight to Amazon Bedrock to provide the feature. Neither Amazon nor any Amazon Bedrock model provider uses any Anthology data or Anthology client data to improve the Amazon Bedrock models. You can find more information about the relevant data privacy practices in the Amazon documentation on the AWS AI Security page, the AWS Machine Learning Blog, the Amazon Q in QuickSight overview, the "AWS announces Amazon Q in QuickSight" blog article, and the Amazon Bedrock Security and Privacy page. |
Was accessibility considered in the design of the Illuminate Data Q&A? | The Illuminate Data Q&A feature leverages Amazon QuickSight. Since this is a third-party feature, accessibility was considered by Amazon rather than Anthology. Amazon makes accessibility conformance reports available via the AWS Accessibility pages. |
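For readers who want a concrete picture of the data flow described above, the following sketch illustrates, in Python, how a user prompt and summary data from dashboard visuals can be combined into a single request of the kind Amazon Bedrock's Converse API accepts. This is an illustrative assumption only: the actual Amazon Q in QuickSight integration is internal to AWS, and the model ID, field names, and helper function here are hypothetical, not Anthology's or Amazon's code.

```python
import json

def build_summary_request(user_prompt: str, visual_data: list) -> dict:
    """Hypothetical helper: combine the user's prompt with summary data
    from dashboard visuals into one Bedrock Converse-style payload."""
    context = json.dumps(visual_data)  # serialized dashboard summary data
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "text": (
                            f"Summarize this dashboard data for the question "
                            f"'{user_prompt}':\n{context}"
                        )
                    }
                ],
            }
        ],
    }

payload = build_summary_request(
    "Which courses had the most activity last week?",
    [{"course": "BIO-101", "sessions": 412}, {"course": "HIST-202", "sessions": 265}],
)
# In a real integration, a payload like this would be sent to the model
# (e.g., via boto3's bedrock-runtime "converse" operation); no call is made here.
print(payload["modelId"])
```

The point of the sketch is only that the prompt and the dashboard's summary data travel together in one transient request, consistent with the statement above that the output is temporary and not permanently stored.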
Considerations and recommendations for institutions
Intended use cases
The Illuminate Data Q&A is only intended to support the functionalities listed above. These functionalities are provided to and intended for our clients' instructors to support them with the analysis and enhancement of Data Q&A reports.
Out of scope use cases
Illuminate Data Q&A currently does not provide a free-form prompt functionality or similar functionality that allows users to directly instruct the generative AI models. Because of this, we do not anticipate any unintended (out of scope) use cases.
Trustworthy AI principles in practice
Anthology and Amazon believe the lawful, ethical, and responsible use of AI is a key priority. This section explains how Anthology and Amazon have worked to address the applicable risks to the lawful, ethical, and responsible use of AI and to implement the Anthology Trustworthy AI principles. It also suggests steps our clients can consider when undertaking their own legal and ethical AI reviews of their implementation.
Transparency and Explainability
- We make it clear in the Illuminate administration configuration options that this is an AI-facilitated functionality.
- In the user interface for instructors, the Illuminate Data Q&A functionalities are clearly marked as "Generative" functionalities. Instructors are also requested to review the text output prior to use. The metadata of the output created by the Illuminate Data Q&A functionalities is exposed in each client's Illuminate usage report with a field for auto-generated content. It also shows whether the output was subsequently edited by the instructor.
- In addition to the information provided in this document on how the Illuminate Data Q&A feature works with Amazon Q in QuickSight, Amazon provides additional information about Amazon Q and Amazon Bedrock in the links in the "Further information" section below.
- We encourage clients to be transparent about the use of AI within the Illuminate Data Q&A and provide their instructors and other stakeholders as appropriate with the relevant information from this document and the documentation linked herein.
Reliability and accuracy
- We make it clear on the Settings page of Illuminate that this is an AI-facilitated functionality that may produce inaccurate or undesired output and that such output should always be reviewed by the end user.
- In the user interface, users are requested to review the text output for accuracy.
- As detailed in the Amazon Bedrock FAQs, there is a risk of inaccurate output. While the specific nature of the Illuminate Data Q&A and our implementation is intended to minimize inaccuracy, it is our clients' responsibility to review output for accuracy, bias, and other potential issues.
- Users can easily regenerate inaccurate output by rephrasing the question in the Q&A prompt.
- As part of their communication regarding the Illuminate Data Q&A, clients should make their instructors aware of this potential limitation.
- Clients can report any inaccurate output to us using the channels listed in the introduction.
Fairness
- Large language models inherently present risks relating to stereotyping, over- or underrepresentation, and other forms of harmful bias.
- Given these risks, we have carefully chosen the Illuminate Data Q&A functionalities to avoid use cases that may be more prone to harmful bias or where the impact of such bias could be more significant.
- Nonetheless, it cannot be excluded that some of the output may be impacted by harmful bias. As mentioned above under "Reliability and accuracy," instructors are requested to review output, which can help to reduce any harmful bias.
- As part of their communication regarding the Illuminate Data Q&A, clients should make their instructors aware of this potential limitation.
- Clients can report any potentially harmful bias to us using the contact channels listed in the introduction.
Privacy and Security
- As described in the "Key facts" section above, only limited user prompts and summary data are used for the Illuminate Data Q&A and accessible to Amazon. The section also describes our and Amazon’s commitment regarding the use of any personal information.
- Amazon describes its data privacy and security practices and commitments in the documentation on Amazon Bedrock Security and Privacy.
Safety
- Large language models inherently present a risk of outputs that may be inappropriate, offensive, or otherwise unsafe. Amazon describes these risks in the Limitations section of its Responsible AI page.
- Given these risks, we have carefully chosen the Illuminate Data Q&A functionalities to avoid use cases that may be more prone to unsafe outputs or where the impact of such output could be more significant.
- Nonetheless, it cannot be excluded that some of the output may be unsafe. As mentioned above under "Reliability and accuracy," instructors are requested to review output, which can further help reduce the risk of unsafe output.
- As part of their communication regarding the Illuminate Data Q&A, clients should make their instructors aware of this potential limitation.
- Clients should report any potentially unsafe output to us using the channels listed in the introduction.
Humans in control
- To minimize the risk related to the use of generative AI for our clients and their users, we intentionally put clients in control of the Illuminate Data Q&A's functionalities. The Illuminate Data Q&A is therefore an opt-in feature. Administrators can activate or deactivate the Illuminate Data Q&A at any time.
- Additionally, instructors are in control of the output, meaning they are requested to review the output and can regenerate the output as needed.
- The Illuminate Data Q&A does not include any automated decision-making that could have a legal or otherwise significant effect on learners or other individuals.
- We encourage clients to carefully review this document including the information links provided herein to ensure that they understand the capabilities and limitations of the Illuminate Data Q&A and the underlying Amazon Q in QuickSight before they activate the Illuminate Data Q&A generative AI feature.
Value alignment
- Large language models inherently carry risks of output that is biased, inappropriate, or otherwise not aligned with Anthology's values or the values of our clients and learners. Amazon describes these risks in the websites linked in the "Further information" section below.
- Additionally, large language models, like every technology that serves broad purposes, present the risk that they can be misused for use cases that do not align with the values of Anthology, our clients, their end users, and society more broadly (for example, for criminal activities or to create harmful or otherwise inappropriate output).
- Given these risks, we have carefully designed and implemented our Illuminate Data Q&A functionalities in a manner to minimize the risk of misaligned output. We have also intentionally omitted potentially high-stakes functionalities.
Intellectual property
- Large language models inherently present risks relating to potential infringement of intellectual property rights. Most intellectual property laws around the globe have not fully anticipated nor adapted to the emergence of large language models and the complexity of the issues that come from their use. As a result, there is currently no clear legal framework or guidance that addresses the intellectual property issues and risks that arise from use of these models.
- It is ultimately our clients' responsibility to review the output generated by the Illuminate Data Q&A for any potential intellectual property rights infringement.
Accessibility
- We designed and developed the Illuminate Data Q&A with accessibility in mind just as we do throughout Learn and our other products. Before the release of the Illuminate Data Q&A , we purposefully improved the accessibility of the semantic structure, navigation, keyboard controls, labels, custom components, and image workflows, to name a few areas. We will continue to prioritize accessibility as we leverage AI in the future.
- The Illuminate Data Q&A feature leverages Amazon QuickSight. AWS makes accessibility conformance reports available via the AWS Accessibility pages.
Accountability
- Anthology has a Trustworthy AI program designed to ensure the legal, ethical, and responsible use of AI. Clear internal accountability and the systematic ethical AI review of functionalities such as those provided by the Illuminate Data Q&A are key pillars of the program.
- To deliver the Illuminate Data Q&A, we partnered with Amazon to leverage Amazon Q in QuickSight. AWS is committed to the responsible use of AI.
- Clients should consider implementing internal policies, procedures, and review of third-party AI applications to ensure their own legal, ethical, and responsible use of AI. This information is provided to support our clients’ review of the Illuminate Data Q&A generative AI features.
Further information
- Anthology’s Trustworthy AI approach
- Amazon’s Responsible AI page
- Amazon’s Bedrock FAQs
- Amazon’s page on Bedrock Security and Privacy
- Amazon’s AI Security Page
- Amazon’s Machine Learning Blog
- Amazon’s Q in QuickSight overview
- AWS announces Amazon Q in QuickSight (blog article)