
@emisso/security

AI-powered security scanner for codebases and pull requests. Uses LLMs (Claude, Codex, Gemini) to detect vulnerabilities, hardcoded secrets, and dependency issues.

When to Use This

  • Before merging PRs — Catch security issues before they reach production
  • In CI/CD pipelines — Automated security scanning via GitHub Actions
  • During code reviews — Get a second opinion from an AI security researcher
  • For threat modeling — Generate STRIDE threat models for your architecture
  • For compliance — Produce SARIF reports compatible with GitHub Code Scanning

Packages

| Package | Description | Install |
| --- | --- | --- |
| @emisso/security | Core scanning engine | npm install @emisso/security |
| @emisso/security-cli | Command-line interface | npm install -g @emisso/security-cli |
| @emisso/security-action | GitHub Action | uses: emisso-ai/emisso-security@v0.1 |

Quick Start

SDK

import fs from 'node:fs/promises';
import { scan, formatSarif, formatMarkdown } from '@emisso/security';

// Scan a repository
const result = await scan({
  path: '/path/to/repo',
  analyzers: ['sast', 'secrets', 'dependencies'],
  provider: 'claude',        // or 'codex', 'gemini'
  model: 'claude-sonnet-4-6', // optional model override
  verify: true,               // 2-step self-verification
});

// Access findings
console.log(`Found ${result.summary.total} issues`);
console.log(`Critical: ${result.summary.critical}, High: ${result.summary.high}`);

for (const finding of result.findings) {
  console.log(`[${finding.severity}] ${finding.title}`);
  console.log(`  File: ${finding.location.file}:${finding.location.startLine}`);
  console.log(`  CWE: ${finding.cwe ?? 'N/A'}`);
  console.log(`  Verified: ${finding.verified}`);
  if (finding.fix) {
    console.log(`  Fix: ${finding.fix.description}`);
  }
}

// Export as SARIF (for GitHub Code Scanning)
const sarif = formatSarif(result);
await fs.writeFile('results.sarif', JSON.stringify(sarif, null, 2));

// Export as Markdown report
const report = formatMarkdown(result);
await fs.writeFile('SECURITY-REPORT.md', report);

CLI

# Scan a local repository
emisso-security scan .
emisso-security scan /path/to/repo --analyzers sast,secrets --provider claude

# Output as SARIF
emisso-security scan . --output sarif --output-file results.sarif

# CI mode (exit code 1 on high/critical findings)
emisso-security scan . --ci --min-severity high

# Review a pull request
emisso-security review-pr owner/repo#42

# Generate a threat model
emisso-security threat-model .

# Initialize config file
emisso-security init

GitHub Action

name: Security Scan
on:
  pull_request:
    branches: [main]

jobs:
  security:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4

      - name: Run Emisso Security Scan
        uses: emisso-ai/emisso-security@v0.1
        with:
          analyzers: sast,secrets
          provider: claude
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          upload-sarif: true
          fail-on-severity: high

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: security-results.sarif

Analyzers

| Analyzer | Detects |
| --- | --- |
| sast | SQL injection, XSS, command injection, path traversal, SSRF, auth bypass, IDOR |
| secrets | API keys (AWS, GCP, Stripe, GitHub, OpenAI), private keys, connection strings |
| dependencies | Known CVEs in package versions, deprecated packages, dependency confusion |
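
For illustration, here is the kind of pattern the sast analyzer targets. The function names below are hypothetical, not part of the package:

```typescript
// Illustrative only: a string-interpolated SQL query (CWE-89) of the kind the
// sast analyzer flags, next to the parameterized form it would suggest as a fix.
function unsafeQuery(userId: string): string {
  // Vulnerable: user input interpolated directly into the SQL string
  return `SELECT * FROM users WHERE id = '${userId}'`;
}

function safeQuery(userId: string): [string, string[]] {
  // Safer: a placeholder plus a bound parameter
  return ["SELECT * FROM users WHERE id = ?", [userId]];
}

// A classic injection payload slips straight through the unsafe version
console.log(unsafeQuery("1' OR '1'='1"));
```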

Security Rules

Rules are markdown files with YAML frontmatter:

---
name: SQL Injection
severity: critical
analyzer: sast
cwe: CWE-89
owasp: "A03:2021"
languages: [typescript, javascript, python]
tags: [injection, database]
---

Detection instructions for the LLM...

Built-in rules cover OWASP Top 10 and common secret patterns. Add custom rules by creating markdown files and pointing to them:

emisso-security scan . --rules ./my-rules/
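
For example, a custom rule file might look like this (the rule below is hypothetical, following the frontmatter shape shown above):

```markdown
---
name: Insecure Deserialization
severity: high
analyzer: sast
cwe: CWE-502
languages: [typescript, javascript]
tags: [deserialization]
---

Flag code that deserializes untrusted input directly into objects without
validation, such as parsing request bodies straight into class instances.
Check whether the input is validated against a schema first.
```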

Self-Verification

When verify: true (default), each finding goes through a second LLM pass where the AI adversarially tries to disprove its own finding. This reduces false positives by:

  1. Checking for existing mitigations (middleware, framework protections)
  2. Verifying the attack path is realistic
  3. Confirming the severity is calibrated correctly

Findings that survive verification are marked verified: true.
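
Downstream code can filter on that flag; a minimal sketch, using hypothetical findings:

```typescript
// Minimal Finding shape for illustration (the real type has more fields)
interface Finding { title: string; severity: string; verified: boolean; }

// Hypothetical findings to illustrate filtering on the verified flag
const findings: Finding[] = [
  { title: "SQL injection in query builder", severity: "critical", verified: true },
  { title: "Possible XSS in template", severity: "medium", verified: false },
];

// Keep only findings that survived the adversarial second pass
const confirmed = findings.filter((f) => f.verified);
console.log(confirmed.length); // 1
```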

Output Formats

| Format | Use Case | Flag |
| --- | --- | --- |
| JSON | Programmatic consumption | --output json |
| SARIF 2.1.0 | GitHub Code Scanning | --output sarif |
| Markdown | Human-readable reports, PR comments | --output markdown |
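
To show what the SARIF output targets, here is a minimal hand-built SARIF 2.1.0 document. The rule id, message, and location are hypothetical, and the real formatSarif output will carry more detail:

```typescript
// A minimal SARIF 2.1.0 document: one run, one rule, one result.
const sarif = {
  $schema: "https://json.schemastore.org/sarif-2.1.0.json",
  version: "2.1.0",
  runs: [
    {
      tool: { driver: { name: "emisso-security", rules: [{ id: "sql-injection" }] } },
      results: [
        {
          ruleId: "sql-injection",
          level: "error",
          message: { text: "User input flows into a SQL query." },
          locations: [
            {
              physicalLocation: {
                artifactLocation: { uri: "src/db.ts" },
                region: { startLine: 42 },
              },
            },
          ],
        },
      ],
    },
  ],
};

console.log(JSON.stringify(sarif, null, 2));
```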

Providers

| Provider | SDK/CLI | Default Model | Best For |
| --- | --- | --- | --- |
| Claude | Claude Agent SDK / claude CLI | claude-sonnet-4-6 | Complex reasoning, cross-file analysis |
| Codex | codex CLI | codex-mini | Fast scanning, cost-efficient |
| Gemini | gemini CLI | gemini-2.5-pro | Large codebases (1M context) |

Configuration

Create .emisso-security.yml in your repo root:

provider: claude
analyzers:
  - sast
  - secrets
verify: true
min_severity: low
exclude:
  - node_modules/**
  - dist/**
  - "*.lock"

Or generate it:

emisso-security init

API Reference

scan(request: ScanRequest): Promise<ScanResult>

Main entry point. Runs analyzers against a repository and returns structured findings.

ScanRequest:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| path | string | Required | Repository path |
| analyzers | AnalyzerType[] | ['sast', 'secrets'] | Analyzers to run |
| provider | ProviderName | 'claude' | LLM provider |
| model | string | Provider default | Model override |
| verify | boolean | true | Self-verification |
| minSeverity | Severity | 'low' | Minimum severity |
| maxTurns | number | 30 | Max LLM turns |
| timeoutMs | number | 120000 | Timeout per analyzer |
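
Written out as a TypeScript interface, the table above implies roughly the following shape (a sketch inferred from the documented fields, not the package's actual type declarations):

```typescript
// Union types inferred from the values shown elsewhere in this README
type AnalyzerType = "sast" | "secrets" | "dependencies";
type ProviderName = "claude" | "codex" | "gemini";
type Severity = "low" | "medium" | "high" | "critical";

interface ScanRequest {
  path: string;               // required: repository path
  analyzers?: AnalyzerType[]; // default: ['sast', 'secrets']
  provider?: ProviderName;    // default: 'claude'
  model?: string;             // default: provider default
  verify?: boolean;           // default: true
  minSeverity?: Severity;     // default: 'low'
  maxTurns?: number;          // default: 30
  timeoutMs?: number;         // default: 120000
}

const request: ScanRequest = { path: "/path/to/repo", analyzers: ["sast"] };
console.log(request.path);
```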

ScanResult:

| Field | Type | Description |
| --- | --- | --- |
| findings | Finding[] | All findings, sorted by severity |
| summary | object | Counts by severity level |
| usage | Usage | Token counts, cost, duration |
| success | boolean | Whether scan completed |
| errors | object[] | Non-fatal errors |
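
As an example of consuming the summary counts, a CI gate mirroring --ci --min-severity high might look like this (the threshold logic is an assumption, not the package's implementation):

```typescript
// Subset of the summary object, for illustration
interface Summary { total: number; critical: number; high: number; }

// Fail the build when findings at or above the threshold exist
function shouldFail(summary: Summary, minSeverity: "high" | "critical"): boolean {
  const gated = minSeverity === "critical"
    ? summary.critical
    : summary.critical + summary.high;
  return gated > 0;
}

console.log(shouldFail({ total: 5, critical: 0, high: 2 }, "high"));     // true
console.log(shouldFail({ total: 5, critical: 0, high: 2 }, "critical")); // false
```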

formatSarif(result: ScanResult): SarifDocument

Convert scan results to SARIF 2.1.0 format for GitHub Code Scanning.

formatMarkdown(result: ScanResult, options?): string

Generate a human-readable Markdown security report.

formatJson(result: ScanResult, pretty?): string

Serialize scan results as JSON.

Cost Estimates

| Provider | Model | ~Cost per Scan |
| --- | --- | --- |
| Claude | Sonnet 4.6 | $0.05-0.20 |
| Claude | Haiku 4.5 | $0.01-0.05 |
| Codex | codex-mini | $0.02-0.10 |
| Gemini | 2.5 Pro | $0.03-0.15 |

Costs vary based on repository size and number of analyzers.

FAQ

Q: Does my code leave my machine? A: Code is sent to the LLM provider's API for analysis. If this is a concern, use a local LLM setup or ensure your provider agreement covers data privacy.

Q: How does this compare to Semgrep/Snyk? A: Traditional SAST tools use pattern matching. @emisso/security uses LLM reasoning to understand context, reducing false positives and catching business logic flaws that patterns miss. Best used alongside traditional tools.

Q: Can I add custom rules? A: Yes. Create markdown files with YAML frontmatter describing what to detect. The LLM uses these as additional instructions.

Q: Which provider should I use? A: Claude for complex reasoning and cross-file analysis. Codex for speed. Gemini for very large repositories.

License

MIT

Contributing

Contributions welcome! See SECURITY.md for how to report vulnerabilities.

git clone https://github.com/emisso-ai/emisso-security
cd emisso-security
pnpm install
pnpm test:run
