
AI Service Agents and Sensitive Data: “What, Me Worry?”

  • mcoppert
  • Aug 12
  • 2 min read

Updated: Aug 17



Managing Sensitive Data with AI Agents

Moving AI Beyond Basic Support

Support teams are no longer limiting AI agents to simple FAQs. With the right guardrails, AI is now helping resolve more complex, sensitive issues.

  • Intercom’s AI Agent Fin now handles 92% of incoming chats and resolves 78% of them

  • Teams are enabling AI to assist with account-specific issues like billing, usage questions, and refunds—but only in controlled, scoped ways

  • Intercom’s own support team starts with narrow use cases and gradually expands as confidence and safeguards grow


 

Sensitive Data Is Broader Than You Think

AI workflows are increasingly touching sensitive data across more industries, formats, and contexts than teams may expect.

  • Beyond obvious data like IDs and banking info, even eyeglass prescriptions and car loan approvals introduce regulatory risk

  • Inputs can include both files (e.g., documents) and plain-text fields (e.g., Social Security numbers or SIM IDs); a small redaction sketch follows this list

  • Teams must assess not just what’s collected, but how, where it’s stored, and who can access it
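
To make the plain-text risk concrete, here is a minimal redaction sketch: recognizable values are masked before a message is logged or forwarded to an AI model. The patterns and the redact_sensitive helper are illustrative assumptions, not a real product feature; production systems would rely on dedicated PII-detection tooling.

```python
import re

# Illustrative patterns for two of the plain-text fields mentioned above.
# Production systems should use dedicated PII-detection tooling, not ad-hoc regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # e.g. 123-45-6789
    "sim_iccid": re.compile(r"\b89\d{17,18}\b"),   # SIM ICCIDs start with 89, 19-20 digits total
}

def redact_sensitive(text: str) -> str:
    """Replace recognizable sensitive values before the text is logged,
    stored, or forwarded to an AI model or third-party service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact_sensitive("My SSN is 123-45-6789, ticket about my SIM 8914800000123456789."))
# -> My SSN is [REDACTED SSN], ticket about my SIM [REDACTED SIM_ICCID].
```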


 

Designing for Security from the Ground Up

Building secure AI workflows means more than compliance—it’s about thoughtful architecture and real-world tradeoffs.

  • Never launch workflows involving sensitive data without approval from internal security and compliance teams

  • Intercom requires login status and permission checks before Fin can surface user-specific data (a minimal sketch of that gating logic follows this list)

  • Use a “think big, start small” approach: prioritize use cases where the business value justifies the effort
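
Below is a minimal sketch of that gating idea. The Session object, its fields, and the permission strings are hypothetical; Intercom’s actual checks inside Fin are internal and not described here.

```python
from dataclasses import dataclass, field

# The Session fields and permission strings below are assumptions for illustration.
@dataclass
class Session:
    user_id: str | None = None
    is_authenticated: bool = False
    permissions: set[str] = field(default_factory=set)

def can_surface_account_data(session: Session, required_scope: str) -> bool:
    """Only surface account-specific data for a logged-in user whose
    permissions cover the scope the answer needs (e.g. 'billing:read')."""
    return (
        session.is_authenticated
        and session.user_id is not None
        and required_scope in session.permissions
    )

def answer_billing_question(session: Session) -> str:
    if not can_surface_account_data(session, "billing:read"):
        # Fall back to generic guidance or a human handoff instead of leaking data.
        return "Please log in so we can look at your account, or I can connect you to an agent."
    # ...safe to fetch and summarize this user's billing records here...
    return f"Here is a summary of the billing history for account {session.user_id}."
```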


 

Where AI Stops and Humans Step In

AI is transforming the efficiency of support—but human judgment is still essential, especially in high-risk situations.

  • AI should handle the upfront heavy lifting: collecting inputs, validating formats, and routing securely (a rough sketch of that split follows this list)

  • For high-stakes decisions—like granting account access or interpreting ID documents—humans still play a critical role

  • In the future, multiple AI agents may collaborate behind the scenes, each with defined scopes and access levels
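
The split in the first bullet (AI collects and validates, humans decide the high-stakes cases) can be sketched roughly as follows. The intents, validation rule, and routing outcomes are made-up examples, not a real routing API.

```python
import re

# Illustrative only: "AI collects and validates, humans decide" as a routing function.
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")          # expect dates like 2025-08-12
HIGH_STAKES_INTENTS = {"grant_account_access", "interpret_id_document"}

def validate_inputs(fields: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the inputs look usable."""
    problems = []
    if "incident_date" in fields and not ISO_DATE.match(fields["incident_date"]):
        problems.append("incident_date must look like YYYY-MM-DD")
    return problems

def route(intent: str, fields: dict[str, str]) -> str:
    problems = validate_inputs(fields)
    if problems:
        return f"ask_user_to_fix: {problems}"   # the AI handles this back-and-forth
    if intent in HIGH_STAKES_INTENTS:
        return "escalate_to_human"              # a person makes the final call
    return "resolve_with_ai"                    # low-risk path, fully automated

print(route("refund_status", {"incident_date": "2025-08-12"}))  # resolve_with_ai
print(route("grant_account_access", {}))                        # escalate_to_human
```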


 

What Still Goes Wrong: Human Error

Despite AI's power, most security risks stem from old-fashioned mistakes in implementation and process.

  • Common missteps: over-permissioned accounts, insecure credential storage, and poor handoff logic (see the snippet after this list)

  • A poorly trained or biased AI model introduces risk—and often erodes trust faster than human mistakes

  • If secure workflows aren’t easy to use, people will default to unsafe behaviors
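
Two of those missteps, hardcoded credentials and over-broad scopes, are cheap to avoid. A small illustrative snippet, with assumed environment-variable and scope names:

```python
import os

# Risky: a secret pasted into the workflow definition ends up in version control.
# API_KEY = "sk-live-..."   # (never do this)

# Safer: read the secret from the environment (or a secrets manager) at runtime,
# and fail loudly if it is missing instead of falling back to a shared key.
API_KEY = os.environ.get("SUPPORT_AGENT_API_KEY")
if API_KEY is None:
    raise RuntimeError("SUPPORT_AGENT_API_KEY is not set; refusing to start the workflow.")

# Request only the scope the workflow actually needs, not an admin token.
REQUESTED_SCOPES = ["billing:read"]   # not ["*"] or ["admin"]
```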


 
 
 
