Over the past year, we’ve worked closely with 18 government departments to better understand what makes an AI help application reliable, consistent, and useful for people using Canada.ca. We’ve taken what we’ve learned and created guidance to support teams designing AI help applications for Canada.ca. The guidance focuses on meeting user needs while aligning with government standards.

Why this guidance exists

Evidence from our Trust studies suggested that it’s important for all AI applications to look and behave consistently across Canada.ca. Consistency helps users recognize that they’re using a trustworthy service and that they’re interacting with an AI, not a human, and it makes the experience predictable across all government programs and services.

In most cases, teams should be able to meet their users’ needs by using AI Answers. This service, expected to launch in FY 2026-27, helps people get answers from information already published on Canada.ca and other Government of Canada websites. However, in some cases, specific topics may still require a separate AI help application. While you can limit a chat application to a certain topic, keep the Canada.ca vision in mind as you experiment and design. The Canada.ca vision is one where users don’t need to know which department handles a specific task. Instead, they should be able to find the information they need seamlessly, regardless of departmental boundaries.

The new guidance provides a practical resource to help teams design topic-specific AI help applications that can’t yet be served by AI Answers. It is intended to help departments build tools that are reliable, accurate, transparent about being AI-driven, and consistent across Canada.ca. It also guides teams to relevant policies and procedures that they must follow.

What the guidance covers

The guidance brings together research evidence, policy, and practical design advice across five key areas. Here’s a brief overview of what you can find in each section.

Central guidance and AI research

All AI help applications must align with the Treasury Board’s Guide on the use of generative artificial intelligence. The guidance reinforces the importance of:

  • identifying and managing risks
  • following established best practices
  • monitoring applications over time
  • applying the FASTER principles (Fair, Accountable, Secure, Transparent, Educated, and Relevant)

Teams must also follow the Digital Standards Playbook when creating an AI help application. The playbook guides teams on how to design digital services in a way that best serves Canadians.

This section also includes additional Government of Canada resources, as well as resources from other governments and external organizations.

Design guidance

The guidance includes clear design recommendations to help users understand and use AI help applications more easily. In addition to following Canada.ca design requirements, applications should:

  • avoid human names to make it clear it’s an AI service, not a human
  • open in a new accessible browser window rather than appearing as a chat panel
  • launch from links or bottom overlays rather than floating buttons or icons, especially to support mobile users (see the sketch after this list)
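
To illustrate this pattern, here is a minimal sketch of an entry point built as an ordinary, clearly labelled link that opens an AI help application in a new browser window rather than as a floating chat widget. The URL, wording, and function name are placeholders for illustration, not part of the guidance.

```ts
// Minimal sketch: an ordinary, clearly labelled link that opens an AI help
// application in a new browser window. URL and wording are placeholders.
function addAiHelpEntryPoint(container: HTMLElement): void {
  const link = document.createElement("a");

  // Placeholder URL for a hypothetical topic-specific AI help application.
  link.href = "https://www.canada.ca/en/example-department/ai-help.html";

  // The label makes it clear this is an AI service and that it opens a new window.
  link.textContent = "Get answers with AI help (opens in a new window)";

  // Open in a separate, full browser window instead of an in-page chat panel.
  link.target = "_blank";
  link.rel = "noopener noreferrer";

  container.appendChild(link);
}
```

A bottom overlay on mobile could reuse the same link; the key point is avoiding a persistent floating button or icon.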

We will continue to update and evolve our design guidance as we learn more from AI Answers as well as from work done by other departments.

“Transparency about whether a client is interacting with a person or a chatbot is essential to ensure that the client is not misled and to maintain trust in government.”

- Guide on the use of generative artificial intelligence

Content guidance

Clear and responsible content is essential when designing AI help applications. The guidance emphasizes the need to:

  • clearly state that the user is interacting with an AI
  • ensure accessibility for all users
  • include advice that addresses privacy, the application’s limitations, and potential mistakes
  • provide accurate and reliable information
  • safeguard against harmful or biased outputs
  • include citations for sources (illustrated in the sketch below)

It also advises against using certain terms, such as “chat” or “now,” which may imply that a human is responding in real time.
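
As one possible way to present these points, the sketch below formats a response so that the AI notice and source citations always appear with the answer. The data shape, wording, and function name are illustrative assumptions, not requirements from the guidance.

```ts
// Illustrative answer shape; field names are assumptions for this sketch.
interface AiAnswer {
  text: string;
  sources: { title: string; url: string }[]; // pages the answer was drawn from
}

function renderAnswer(answer: AiAnswer): string {
  // Always show the answer with a notice that it was generated by AI
  // and with citations to the source pages.
  const citations = answer.sources
    .map(source => `- ${source.title}: ${source.url}`)
    .join("\n");

  return [
    "This answer was generated by artificial intelligence. Check the sources below.",
    answer.text,
    "Sources:",
    citations,
  ].join("\n\n");
}
```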

Privacy and security considerations

Protecting user privacy and government systems is a core requirement when designing an AI help application. The guidance outlines expectations to:

  • minimize or completely block the collection of personal information (see the sketch after this list)
  • protect applications from misuse or manipulation
  • follow the Government of Canada’s Enterprise Cyber Security Strategy
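
As one approach to the first point, the sketch below screens a question for common personal information patterns before it is stored or sent to a model, and rejects it with a generic message if anything matches. The patterns, wording, and function name are assumptions for illustration only; each application’s own privacy assessment would define what actually needs to be blocked.

```ts
// Illustrative patterns for common personal information. Examples only;
// not an exhaustive or official list.
const PERSONAL_INFO_PATTERNS: RegExp[] = [
  /\b\d{3}[- ]?\d{3}[- ]?\d{3}\b/, // 9-digit number resembling a SIN
  /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/,  // email address
  /\b\d{3}[- .]?\d{3}[- .]?\d{4}\b/, // phone-number-like sequence
];

// Returns whether the question can be submitted, with a message when blocked.
function screenQuestion(question: string): { ok: boolean; message?: string } {
  for (const pattern of PERSONAL_INFO_PATTERNS) {
    if (pattern.test(question)) {
      // Reject before the text is stored or forwarded to the model.
      return {
        ok: false,
        message: "Please remove personal information, such as identification numbers or email addresses, and try again.",
      };
    }
  }
  return { ok: true };
}
```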

These measures help ensure AI help applications remain safe and trustworthy.

Designing iteratively

Building an effective AI help application isn’t a one-time task; it’s an ongoing process. With the rapid pace of generative AI development, departments must experiment and learn quickly. The guidance encourages teams to:

  • conduct UX research early in the design process and through the entire lifecycle
  • share early versions with call centre teams and other subject-matter experts
  • use feedback to improve the application over time
  • measure success by whether users’ needs are being met with accurate responses, not just by usage numbers

Remember: success is measured not by the number of people using an application, but by how many report that it solved their problem and by the accuracy of the answers they received, as assessed by subject-matter experts.
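
To make that measure concrete, here is a small hypothetical sketch of how a team might summarize the two signals: the share of users who report their problem was solved, and the accuracy of answers as rated by subject-matter experts. The record shape and field names are assumptions for illustration.

```ts
// Hypothetical feedback record; field names are illustrative only.
interface FeedbackRecord {
  problemSolved: boolean;        // did the user report that their problem was solved?
  expertRatedAccurate?: boolean; // set only for answers reviewed by subject-matter experts
}

function summarizeSuccess(records: FeedbackRecord[]): { solvedRate: number; accuracyRate: number } {
  const solved = records.filter(r => r.problemSolved).length;
  const reviewed = records.filter(r => r.expertRatedAccurate !== undefined);
  const accurate = reviewed.filter(r => r.expertRatedAccurate).length;

  return {
    // Share of respondents who said the application solved their problem.
    solvedRate: records.length > 0 ? solved / records.length : 0,
    // Share of expert-reviewed answers judged accurate.
    accuracyRate: reviewed.length > 0 ? accurate / reviewed.length : 0,
  };
}
```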

Why this guidance matters

AI help applications can improve access to government information, but only if they’re designed responsibly and consistently. This guidance helps departments build tools that support users while aligning with government policies, accessibility requirements, and security standards.

Next steps

Departments planning to design an AI help application are encouraged to review the new guidance and consider how it applies to their work. By following a shared approach, we can provide people with AI tools that are clear, dependable, and easy to use across Canada.ca.

Read the new Guidance for the design of AI help applications on Canada.ca