Endpoint AI Security
This guide will help you understand AI technology, recognize risks, and follow practices to protect yourself and the university. Additionally, this page should highlight risks or concerns to an endpoint with AI tools.
Understanding AI Basics
What is an LLM?
A Large Language Model (LLM) powers tools like ChatGPT, Claude, and Gemini. The risks these tools introduce stem from how LLMs work.
An LLM is NOT:
- ❌ A database that looks up facts
- ❌ A system that "thinks" like a human
- ❌ Always accurate or truthful
An LLM IS:
- ✅ A pattern recognition system trained on massive text datasets
- ✅ A prediction engine that generates likely responses based on patterns
- ✅ Capable of "hallucinating" (confidently stating false information)
AI predicts patterns. Always verify critical information, especially for policy, compliance, or technical decisions.
Conversational AI vs. Agentic AI
Conversational AI Tools
Examples: ChatGPT, Claude, Google Gemini
- Limited to conversation in a chat window
- You manually copy/paste/upload information
- Cannot access your files or execute commands
- Risk Level: Lower (but still requires caution)
Agentic AI Tools
Examples: Cursor, Roo Code, Claude Code, OpenClaw (and others 🦞)
- Can execute commands on your computer
- Can read and write files on your system
- Can make API calls to external services
- Can chain multiple actions autonomously
- Risk Level: Higher (requires careful configuration)
Why this matters:
Agentic AI's power to automate also creates risks. It can access sensitive data, execute unintended commands, and send data externally without obvious indication.
The Lethal Trifecta
The major security concern with agentic AI is the Lethal Trifecta. Three risk factors that, when combined, create serious vulnerabilities. Removing just one of these risk factors can significantly reduce risk.
Private Data Access
The Risk: AI tools that can read your files may access sensitive data like student records, research data, passwords, or confidential information.
❌ DON'T: Give AI access to folders with student grades or protected information
Untrusted Content
The Risk: AI tools can be given content from an untrusted source. The source could be leveraging attacks like prompt injection to change how the agent is acting. In doing so, the agent may begin to do something you do not want it to do.
❌ DON'T: Allow any unvetted source or input mechanism to interact with the agent.
External Communication
The Risk: AI tools with access can send data to external servers, intentionally or through exploitation, potentially causing data exfiltration or compliance violations.
❌ DON'T: Allow for unfettered external access with mechanisms like Email or Google Drive.
Data Classification Reference
Before using AI with any data, understand the classification of the data you intend to use. You can review data classification here.
Smart AI Usage Guidelines
Key Considerations
- Know Your Data - Check classification before using AI
- Use Approved Tools - Default to TAMU Chat for university work
- Never Trust Blindly - Always verify AI outputs
- Think Before You Share - Consider implications of data sharing
- Keep Humans in the Loop - AI assists, humans decide
Quick Do's and Don'ts
✅ DO:
- Use TAMU Chat for university-related work
- Review AI-generated code before executing
- Verify AI outputs against trusted sources
- Keep sensitive data in protected locations
- Report security concerns to endpoint-security@tamu.edu
❌ DON'T:
- Upload student grades to public AI tools
- Give AI unrestricted file system access
- Execute AI commands without understanding them
- Share passwords or credentials with AI
- Use unapproved AI tools for sensitive work
Scenarios
Writing Code
✅ Safe:
- Use AI for boilerplate code and common patterns
- Ask AI to explain algorithms
- Generate unit tests
❌ Unsafe:
- Giving AI access to proprietary codebases
- Executing AI code without review
- Sharing credentials in code snippets
Research Assistance
✅ Safe:
- Summarize publicly available papers
- Help structure writing
- Generate research ideas
❌ Unsafe:
- Sharing unpublished research data
- Using AI with export-controlled research
- Uploading proprietary sponsor data
Teaching and Grading
✅ Safe:
- Generate example problems
- Create rubrics and assignment descriptions
- Get teaching strategy ideas
❌ Unsafe:
- Uploading student submissions to public AI
- Sharing student grades or performance data
- Using AI for final grading decisions
TAMU Chat
TAMU Chat is an LLM platform that enables access to different LLMs. Additionally, data classified equal to or lower than "University-Confidential" is supported. It's going to be safer, allow you to do more with data, and provide access to more models!
Why use it:
- ✅ Appropriate for university-internal data
- ✅ No data used for external AI training
Access: Visit TAMU AI Services to learn more or TAMU Chat to access the platform.
API Access
Create API keys to use protected models in external tools or agentic AI platforms.
- How to get your API key
- Review and understand the risks and concerns above before enabling agentic AI
Other Approved Tools
Review other AI options listed here: TAMU AI Services
Key Policies and Resources
Essential TAMU Policies
- Artificial Intelligence SAP - Artificial Intelligence Standard Administrative Procedure
- Security Controls Catalog - Comprehensive security requirements
- FERPA Compliance (SAP 13.02.99.M0.01) - Student data protection
- CUI Requirements (System Reg 15.05.02) - Controlled Unclassified Information
- Data Classification (DC-1 through DC-6) - Understanding data sensitivity
Additional Resources
- Why AI Security Matters - Technology Services news post
- TAMU IT AI Services - Official guidance and approved tools
- NIST AI Risk Management Framework - Industry best practices
- OWASP Top 10 for LLM Applications - Security risks and mitigations