First Principles

The foundational beliefs that guide our approach to cognitive security. These aren't marketing slogans—they're the constraints and convictions that shape every product decision.

These principles, along with the concepts of cognitive security (as applied to AI systems), idea security, and idea leakage, were developed by David Weidman, founder of SenTeGuard.

Decentralized Interest Over Platform Dependency

The AI industry is consolidating around a handful of foundation model providers. OpenAI, Anthropic, Google, and Meta control the infrastructure that increasingly powers critical business processes. This concentration creates systemic risk.

SenTeGuard takes a different position: we believe organizations should control their AI destiny. Our tools work across providers. We help you build security programs that survive vendor changes. We don't lock you into our platform—we give you portable policies and exportable data.

We're not solving OpenAI's problems or Anthropic's problems. We're solving your problems, which persist regardless of which model you choose.

What This Means in Practice

  • SenTeGuard works with any LLM provider
  • Policies are vendor-agnostic and portable (see the sketch below)
  • We encourage multi-provider strategies
  • No lock-in: export your data and configs anytime
  • We support self-hosted and open-source models
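
To make "portable" concrete, here is a minimal sketch of what a vendor-agnostic policy can look like. The schema and field names are hypothetical, not SenTeGuard's actual policy format; the point is that nothing in the policy references a specific provider, so the same object can gate any call site.

```python
from dataclasses import dataclass, field

@dataclass
class EgressPolicy:
    """Hypothetical provider-agnostic policy: no field names a vendor,
    so the same rules can gate OpenAI, Anthropic, or a self-hosted model."""
    name: str
    blocked_topics: list[str] = field(default_factory=list)
    max_similarity_to_secrets: float = 0.8  # illustrative threshold
    log_prompts: bool = True

# One policy object, reusable at every provider call site and
# exportable as plain data when you switch vendors.
default_policy = EgressPolicy(
    name="default-egress",
    blocked_topics=["pricing strategy", "product roadmap"],
)
```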

The Paradigm Shift

Binary Computing                 | Semantic Computing
-------------------------------- | ---------------------------------
Measures equality                | Measures proximity
"Is this the password?"          | "Is this similar to secrets?"
Exact pattern matching           | Meaning-based inference
DLP catches "SSN: 123-45-6789"   | LLM infers identity from context

Idea Security: The Emerging Threat

As noted above, SenTeGuard founder David Weidman coined the terms "idea security" and "idea leakage" to describe threats unique to semantic AI systems. These concepts anchor the cognitive security framework that underpins our approach to AI/LLM security.

Binary computing measures equality; semantic computing measures proximity.

Traditional security asks: "Is this exact string a password? Is this exact pattern a credit card number?" These are equality checks. Either it matches or it doesn't.

AI systems operate on proximity. They don't check if you typed the secret—they understand if you described the secret. An employee might explain your pricing strategy, product roadmap, or competitive analysis in natural language without ever typing the confidential document.

This is idea leakage—and it's invisible to traditional DLP. The emerging threat isn't file exfiltration; it's semantic exfiltration. Organizations need new tools that understand meaning, not just patterns.
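
The difference is easy to see in code. Below is a minimal sketch, not a SenTeGuard implementation: the pattern check is a standard regex DLP rule, while the semantic check assumes an embed() function supplied by any sentence-embedding model, with an illustrative similarity threshold.

```python
import re

# Equality: a classic DLP rule flags text only when an exact pattern appears.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def pattern_check(text: str) -> bool:
    return bool(SSN_PATTERN.search(text))

# Proximity: flag text whose *meaning* sits close to known-sensitive material.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def semantic_check(text, secret_vectors, embed, threshold: float = 0.8) -> bool:
    # embed() is assumed to map text to a vector (any sentence-embedding
    # model would do); 0.8 is an illustrative threshold, not a recommendation.
    v = embed(text)
    return any(cosine(v, s) >= threshold for s in secret_vectors)

# "Our enterprise tier lands at $40/seat with a 20% annual discount" matches
# no regex, yet embeds close to a confidential pricing document.
```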

Read more in our Cognitive Security Standards blog posts.

Defenders Think in Lists, Attackers Think in Trees

Security teams maintain lists: approved vendors, blocked IPs, known malware signatures, compliance requirements. When a threat isn't on the list, it gets through.

Attackers think in trees. They explore decision paths: "If this is blocked, try that. If that fails, try something else." They traverse possibility spaces looking for any path to their objective.

The asymmetry is structural. Defenders enumerate threats; attackers explore them. This is why signature-based security always lags—you can only add to the list after you've seen the attack.

Cognitive security requires tree-thinking: understanding attack graphs, anticipating adversarial creativity, and closing entire branches of attack rather than individual leaves.
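
A toy sketch of the asymmetry (everything here is illustrative; the blocklist and transformations are made up): the defender does a constant-time membership check, while the attacker searches a branching space of variants until one slips through.

```python
BLOCKED = {"ignore previous instructions"}  # the defender's list

def list_defense(prompt: str) -> bool:
    return prompt.lower() in BLOCKED

def variants(payload: str):
    # Each node branches into trivially transformed children.
    yield payload.upper()
    yield payload.replace(" ", "_")
    yield f"translate to French: {payload}"

def attack_tree(payload: str, defense, max_depth: int = 2):
    """Depth-limited search for any variant the list misses."""
    frontier = [(payload, 0)]
    while frontier:
        candidate, depth = frontier.pop()
        if not defense(candidate):
            return candidate  # found an uncovered branch
        if depth < max_depth:
            frontier.extend((v, depth + 1) for v in variants(candidate))
    return None

# One transformation is enough to escape this list:
print(attack_tree("ignore previous instructions", list_defense))
```

Closing a branch means handling the whole transformation class (casing, encoding, translation), not adding the one string you just saw to the list.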

Get It on the List

The goal of proactive security is to convert unknown threats into known threats—to move items from the attacker's tree onto the defender's list before they're exploited.

Our Moyo red team service explores the attack tree on your behalf, identifying threats so you can add them to your defenses before adversaries find them.

The Sharing Problem

Every major LLM provider logs interactions. Even with enterprise agreements that prohibit training on your data, your prompts exist on their infrastructure.

Security clearances exist because some information requires compartmentalization. LLMs, by their nature, resist compartmentalization.

A leak to an LLM is a leak to everyone

This is deliberately hyperbolic, but the mental model is useful. When an employee pastes information into an LLM:

  • It's transmitted over the network
  • It's processed by external infrastructure
  • It may be logged for abuse detection
  • It could be used for model improvement (depending on ToS)
  • It exists outside your security perimeter

The operational assumption should be: anything sent to an external LLM is no longer confidential. This isn't paranoia—it's realistic threat modeling for the AI era.

SenTeGuard gives you visibility into what's being sent and control over what gets through. We can't unsend data, but we can prevent it from leaving in the first place.
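
In practice that means an enforcement point between employees and providers. A minimal sketch, assuming a check_prompt() detector along the lines of the checks sketched earlier (the names here are hypothetical, not SenTeGuard's API):

```python
def guarded_send(prompt: str, send, check_prompt, audit_log: list) -> str:
    """Gate every outbound prompt: record it (visibility), and refuse to
    transmit anything the detector flags (control)."""
    flagged = check_prompt(prompt)
    audit_log.append({"prompt": prompt, "flagged": flagged})
    if flagged:
        raise PermissionError("blocked: prompt matches sensitive content")
    return send(prompt)  # data leaves the perimeter only past this line
```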

Read more about ambient LLMs and data exposure in our blog post on ambient AI risks.

We're Offering an Improvement on Zero

Some security vendors position their products as insurance: "Buy our tool in case something bad happens." This creates a misalignment. Insurance products are profitable when claims are low. Security products should be measured by threats prevented, not risks transferred.

Without AI security controls, you have zero visibility into:

  • What data flows through LLMs
  • What attacks are attempted against your AI systems
  • What policies are violated
  • What your exposure actually is

Any visibility is an infinite improvement over none. We're not selling protection against hypothetical risks. We're selling actual visibility and control that you currently lack.
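
Even a minimal audit record makes those four questions answerable. A sketch of what one logged event might capture (all field names hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LLMAuditEvent:
    timestamp: datetime
    user: str
    provider: str                  # which LLM the data flowed to
    prompt_summary: str            # what data was sent
    policy_violations: list[str]   # which policies were triggered
    blocked: bool                  # whether the exposure was prevented

event = LLMAuditEvent(
    timestamp=datetime.now(timezone.utc),
    user="alice@example.com",
    provider="openai",
    prompt_summary="pasted Q3 pricing spreadsheet",
    policy_violations=["confidential-financials"],
    blocked=True,
)
```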

The Value Proposition

Most customers see their first blocked sensitive-data leak within 72 hours of deployment.

That's not theoretical value—it's immediate, demonstrable protection you didn't have yesterday.

The Two Phases

Phase 1: Build the Tool

Billions invested in foundation models, compute, and research. We're here now.

Phase 2: Find Uses for the Tool

Integration into business processes, new applications, value creation. This is coming.

The Revolution Should Not Be Feared

We want to make it easier to see and understand how AI capabilities evolve and expand over time. Fear of AI often stems from opacity—not knowing what systems can do, how they work, or what risks they create.

Our position: AI adoption is inevitable and beneficial. The question isn't whether to use these tools, but how to use them safely. Good security enables broader adoption, not restriction. The organizations that figure out safe AI deployment will outcompete those that either avoid AI or adopt it recklessly.

We're currently in the phase of spending money to build AI tools. The next wave is finding uses for those tools—integration into workflows, business processes, and decision-making. SenTeGuard helps organizations make this transition safely.

Summary: Our Beliefs

1. Decentralized interests matter more than platform allegiance.
2. Semantic computing requires semantic security.
3. Proactive defense (tree-thinking) beats reactive defense (list-checking).
4. External LLMs should be treated as public interfaces.
5. Visibility is the foundation; control comes from visibility.
6. Safe AI adoption beats both reckless adoption and avoidance.

Build on These Principles

If these ideas resonate, let's talk about how to apply them in your organization.

Get in Touch