Frequently Asked Questions
Answers to the questions we hear most often about AI security, our services, and cognitive security in general.
AI Security Fundamentals
What is cognitive security and why does it matter?
Cognitive security, as defined by David Weidman of SenTeGuard, protects the information and decision-making processes that flow through AI systems. Traditional network security defends infrastructure (firewalls, endpoints, data at rest). Cognitive security defends the semantic content: preventing LLMs from leaking trade secrets, being manipulated into harmful outputs, or exposing sensitive reasoning patterns.
Network security defends the wires. Cognitive security defends the thoughts flowing through them.
What is the difference between network security and cognitive security?
| Aspect | Network Security | Cognitive Security |
|---|---|---|
| Protects | Infrastructure, data at rest/transit | Information semantics, AI reasoning |
| Attack vector | Packets, exploits, malware | Natural language, semantic manipulation |
| Detection method | Signatures, anomaly detection | Semantic analysis, intent classification |
| Key tools | Firewall, IDS/IPS, SIEM | AI proxy, content filters, output validators |
If I send data to an LLM, who else can access it?
This depends on your deployment model and provider agreements:
- Third-party apps: ChatGPT, web-based tools often have broad data usage rights in their ToS
- On-premises models: You control the data, but employees may still exfiltrate via prompts
- Ambient LLMs: The growing ecosystem of AI assistants embedded in everyday tools (email clients, browsers, productivity software) that process your data in ways you may not expect or consent to
The safest assumption: if a human enters information into a network-connected machine, treat it as potentially accessible. SenTeGuard intercepts and filters before transmission.
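To make "intercept and filter before transmission" concrete, here is a toy pattern-based outbound filter. Every pattern, name, and codename below is illustrative only; it is not SenTeGuard's implementation, which goes well beyond regex matching:

```python
import re

# Hypothetical policy patterns; a real deployment would load these from config.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN format
    re.compile(r"(?i)\bproject\s+falcon\b"),        # example internal codename
    re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S+"),  # inline API keys
]

def filter_outbound(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Block prompts matching any sensitive pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked by pattern: {pattern.pattern}"
    return True, "ok"

allowed, reason = filter_outbound("Summarize the Q3 roadmap for Project Falcon")
print(allowed, reason)
```

The point of the sketch: the check happens before the prompt ever leaves the machine, so the LLM provider never sees the blocked content.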
Read more about ambient LLMs and data exposure in our blog post on ambient AI risks.
LLM Threats & Attacks
What is idea leakage and why should I care?
Idea leakage, a term coined by David Weidman of SenTeGuard, describes the unintentional exposure of confidential information through semantic proximity rather than explicit disclosure. Traditional security measures binary data: is this the password or not? Semantic AI systems operate on proximity: how close is this to confidential information? An employee might describe a trade secret in natural language without ever typing the actual secret.
Binary computing measures equality; semantic computing measures proximity.
A model might infer your pricing strategy, product roadmap, or competitive positioning from seemingly innocuous conversations. This is idea leakage—and it's the emerging threat that traditional DLP can't catch. The related concept of idea security addresses protecting not just explicit data, but the ideas and inferences derivable from AI interactions.
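The equality-vs-proximity distinction can be shown in a few lines. This sketch uses bag-of-words cosine similarity as a stand-in for real embeddings; the sentences are invented examples:

```python
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors (a stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

secret = "our battery anode uses silicon nanowire coating"
paraphrase = "the anode coating is silicon nanowire based"

# Binary check: exact equality misses the paraphrase entirely.
print(secret == paraphrase)  # False
# Proximity check: the paraphrase scores high despite no exact match.
print(round(cosine(secret, paraphrase), 3))
```

An exact-match DLP rule sees two different strings; a proximity check sees the same idea twice. That gap is where idea leakage lives.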
Read more in our First Principles and on our blog.
Our Services
How do I prevent employees from pasting confidential data into ChatGPT?
A multi-layered approach:
- Policy: Clear acceptable use guidelines for AI tools
- Training: Help employees understand what not to share
- Technical controls: SenTeGuard proxy for API access; endpoint DLP for browser
- Monitoring: Audit logs to detect policy violations
No single control is perfect. SenTeGuard catches programmatic API calls today; browser extensions for direct web access are planned for Q3 2026.
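The policy and monitoring layers above can be combined in a minimal sketch. The function names, term list, and log format here are hypothetical, not SenTeGuard's API:

```python
import datetime
import json

AUDIT_LOG = []  # in production: an append-only store or SIEM feed

def check_and_log(user: str, prompt: str, confidential_terms: set[str]) -> bool:
    """Apply a simple term-list policy check and record every decision for audit."""
    hits = [t for t in confidential_terms if t.lower() in prompt.lower()]
    allowed = not hits
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "allowed": allowed,
        "matched_terms": hits,
    }))
    return allowed

terms = {"project falcon", "q4 pricing"}
print(check_and_log("alice", "What is 2+2?", terms))                     # True
print(check_and_log("bob", "Draft the Q4 pricing announcement", terms))  # False
```

Note that both allowed and blocked requests are logged: the audit trail is what turns a blunt term list into evidence for training and policy refinement.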
What's the ROI of AI security tools?
We're not selling insurance policies; we're selling an improvement over having no protection at all.
Without AI security controls, you have zero visibility into what data flows through LLMs, what attacks are attempted, or what policies are violated. Any visibility is a significant improvement over none.
Quantitatively: the cost of a single data breach typically exceeds ten times the annual subscription cost. One avoided regulatory fine pays for years of compliance tooling.
Do you work with government and defense customers?
Yes. Our founder has a decade of Army cyber operations experience. SenTeGuard supports:
- Air-gapped deployments
- CMMC and ITAR considerations
Contact us to discuss your specific requirements.
Compliance & Regulations
What AI regulations should my company comply with?
Depends on your industry, geography, and use cases:
- US Federal contractors: NIST AI RMF, upcoming OMB AI guidance
- EU operations: EU AI Act (most obligations apply by August 2026)
- Healthcare: HIPAA + FDA guidance on AI/ML medical devices
- Financial services: SEC cyber rules, model risk management guidance (Fed SR 11-7 / OCC Bulletin 2011-12)
- All industries: State laws (e.g., Colorado AI Act), FTC enforcement
Our consulting services include regulatory mapping for your specific situation.
How do I implement the NIST AI Risk Management Framework?
NIST AI RMF has four functions: Govern, Map, Measure, Manage. Practical implementation:
- Govern: Establish AI governance structure, policies, roles
- Map: Inventory AI systems, document context and impacts
- Measure: Assess risks, test for bias/security/reliability
- Manage: Implement controls, monitor, respond to issues
Our Program Build consulting engagement includes complete RMF implementation tailored to your organization.
Service-Specific FAQs
SenTeGuard Platform
Technical questions about our cognitive security platform
View SenTeGuard FAQs →
Still Have Questions?
Can't find what you're looking for? Reach out and we'll get you an answer.