Intermediate

Dependency and Third-Party Risk

Lesson 2 of 4 · Estimated time: 50 min

Managing Risk from External Dependencies

Your AI system depends on many external components: LLM APIs, embedding models, frameworks, libraries, and services. Each introduces risk.

Dependency Inventory

class DependencyInventory:
    def __init__(self):
        self.dependencies = {
            'llm_providers': [
                {'name': 'OpenAI', 'api': 'gpt-4', 'criticality': 'CRITICAL'},
                {'name': 'Anthropic', 'api': 'claude-3', 'criticality': 'HIGH'},
            ],
            'frameworks': [
                {'name': 'transformers', 'version': '4.30.0', 'criticality': 'CRITICAL'},
                {'name': 'langchain', 'version': '0.0.300', 'criticality': 'HIGH'},
            ],
            'plugins': [
                {'name': 'memory_plugin', 'source': 'github.com/user/repo', 'trusted': True},
                {'name': 'embedding_plugin', 'source': 'huggingface', 'trusted': True},
            ],
        }

    def audit_dependencies(self):
        """Audit all dependencies for security."""

        findings = []

        for dep_type, deps in self.dependencies.items():
            for dep in deps:
                # Check for known vulnerabilities
                vulns = self.check_vulnerabilities(dep['name'], dep.get('version'))

                if vulns:
                    findings.append({
                        'dependency': dep['name'],
                        'type': dep_type,
                        'vulnerabilities': vulns,
                        'criticality': dep.get('criticality', 'UNKNOWN'),
                    })

        return findings

    def check_vulnerabilities(self, name, version=None):
        """Check for known vulnerabilities in dependency."""

        # Query vulnerability database
        # (In practice, use NVD, GitHub Security Advisory, etc.)

        return []
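In practice, `check_vulnerabilities` would query a service such as OSV.dev, the NVD, or GitHub Security Advisories. A minimal offline sketch of the matching logic, using a hypothetical in-memory advisory list (the package name and advisory IDs below are made up for illustration):

```python
# Hypothetical advisory data: package -> list of (first_fixed_version, advisory_id).
# A real implementation would fetch this from OSV.dev, the NVD, or
# GitHub Security Advisories instead of hard-coding it.
ADVISORIES = {
    'examplelib': [('2.4.1', 'HYPO-2024-0001')],
}

def parse_version(version):
    """Parse a plain numeric version like '1.2.3' into a comparable tuple.

    Only handles simple X.Y.Z versions; real code should use the
    `packaging` library for full PEP 440 support.
    """
    return tuple(int(part) for part in version.split('.'))

def check_vulnerabilities(name, version=None):
    """Return advisories affecting versions below the first fixed release."""
    if version is None:
        return []
    found = []
    for fixed_in, advisory_id in ADVISORIES.get(name, []):
        if parse_version(version) < parse_version(fixed_in):
            found.append({'id': advisory_id, 'fixed_in': fixed_in})
    return found
```

A version at or above the fix threshold produces no findings; anything older is flagged.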

Risk: Vulnerable Dependencies

Attack: Your system uses a library with known vulnerabilities. Attacker exploits the vulnerability.

class VulnerableDependencyExample:
    """Example: importing a library pinned to a vulnerable version."""

    def vulnerable_code(self):
        # Suppose the pinned version of this library has a published
        # remote code execution (RCE) advisory against it
        import transformers  # pinned to an old, vulnerable release

        # An attacker who can reach the vulnerable code path can then
        # execute arbitrary code in your process
        return transformers

Defense: Dependency Management

class SecureDependencyManagement:
    def __init__(self):
        self.approved_versions = {
            'transformers': '>=4.30.0',  # Only approved versions
            'torch': '>=2.0.0',
            'langchain': '>=0.1.0',
        }

        # Stdlib modules and builtins banned from first-party code
        # (enforced via code review or static analysis; these are not
        # pip-installable packages)
        self.prohibited_packages = [
            'pickle',  # Unsafe deserialization
            'exec',    # Dynamic code execution (builtin)
        ]

    def validate_dependencies(self, requirements_file):
        """Validate all dependencies meet security requirements."""

        # Note: pkg_resources is deprecated; importlib.metadata plus the
        # `packaging` library are the modern replacements
        import pkg_resources

        issues = []

        with open(requirements_file) as f:
            requirements = list(pkg_resources.parse_requirements(f))

        for req in requirements:
            package_name = req.project_name
            installed_version = pkg_resources.get_distribution(package_name).version

            # Check if version is approved
            if package_name in self.approved_versions:
                spec = self.approved_versions[package_name]
                if not self.version_satisfies(installed_version, spec):
                    issues.append({
                        'package': package_name,
                        'installed': installed_version,
                        'required': spec,
                        'issue': 'Version not approved'
                    })

            # Check if package is prohibited
            if package_name in self.prohibited_packages:
                issues.append({
                    'package': package_name,
                    'issue': 'Prohibited package'
                })

        return issues

    def version_satisfies(self, version, spec):
        """Check if version satisfies a specifier like '>=4.30.0'."""

        # Uses the third-party `packaging` library for PEP 440 matching
        from packaging.specifiers import SpecifierSet
        return SpecifierSet(spec).contains(version)
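Since `pkg_resources` is deprecated, a modern sketch of the same validation uses the stdlib `importlib.metadata` for installed versions and the third-party `packaging` library for specifier matching (the approved list below is illustrative):

```python
# Sketch: validate installed package versions against an approved list.
# Assumes the third-party `packaging` library is available; the pins
# here are illustrative, not recommendations.
from importlib import metadata

from packaging.specifiers import SpecifierSet

APPROVED_VERSIONS = {
    'requests': '>=2.31.0',  # illustrative pin
}

def validate_installed(approved=APPROVED_VERSIONS):
    """Return packages missing or outside their approved version range."""
    issues = []
    for name, spec in approved.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            issues.append({'package': name, 'issue': 'Not installed'})
            continue
        if installed not in SpecifierSet(spec):
            issues.append({
                'package': name,
                'installed': installed,
                'required': spec,
                'issue': 'Version not approved',
            })
    return issues
```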

Risk: Compromised Third-Party API

Attack: API provider is compromised or returns malicious data.

class APIProviderRiskMitigation:
    def call_external_api(self, api_name, parameters):
        """Call external API with safety measures."""

        import requests

        try:
            # Timeout prevents hanging on an unresponsive provider
            response = requests.post(
                f'https://api.example.com/{api_name}',
                json=parameters,
                timeout=10
            )
            response.raise_for_status()

            # Validate response format
            data = response.json()

            if not self.validate_response(data):
                raise ValueError("Invalid API response")

            return data

        except Exception:
            # Fail closed: fall back to a safe default
            return self.get_safe_default_response()

    def validate_response(self, data):
        """Validate API response is reasonable."""

        # Check schema
        if 'result' not in data:
            return False

        # Check for suspicious content
        if 'malicious' in str(data).lower():
            return False

        # Check size
        if len(str(data)) > 1000000:
            return False

        return True

    def get_safe_default_response(self):
        """Return safe default if API fails."""

        return {
            'result': 'Service unavailable',
            'use_cached': True
        }
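Falling back on every failure can mask a persistent outage. A common complement is a circuit breaker that stops calling a provider after repeated failures and retries only after a cooldown. A minimal sketch (the thresholds are illustrative):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures,
    allows a trial call again after a cooldown. Thresholds are
    illustrative defaults, not recommendations."""

    def __init__(self, failure_threshold=3, cooldown_seconds=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.consecutive_failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self):
        """True if the provider may be called right now."""
        if self.opened_at is None:
            return True
        # Half-open: permit one trial call once the cooldown expires
        return time.monotonic() - self.opened_at >= self.cooldown_seconds

    def record_success(self):
        self.consecutive_failures = 0
        self.opened_at = None

    def record_failure(self):
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            self.opened_at = time.monotonic()
```

A caller like `call_external_api` would check `allow_request()` before issuing the request and return the safe default immediately while the circuit is open.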

Risk: Plugin Security

Attack: Plugins or extensions contain vulnerabilities or are malicious.

class SecurityError(Exception):
    """Raised when a plugin fails a security check."""


class PluginSecurity:
    def __init__(self):
        self.approved_plugins = {
            'memory_plugin': {
                'source': 'https://github.com/official/memory',
                'hash': 'abc123...',
                'permissions': ['read_memory', 'write_memory'],
            }
        }

    def validate_plugin(self, plugin_path):
        """Validate plugin before loading."""

        # Check source
        if not self.is_trusted_source(plugin_path):
            raise SecurityError("Plugin source not trusted")

        # Check hash
        if not self.verify_hash(plugin_path):
            raise SecurityError("Plugin hash mismatch")

        # Check permissions
        permissions = self.analyze_plugin_permissions(plugin_path)

        suspicious = self.find_suspicious_permissions(permissions)

        if suspicious:
            raise SecurityError(f"Plugin requests suspicious permissions: {suspicious}")

        return True

    def analyze_plugin_permissions(self, plugin_path):
        """Analyze what plugin accesses."""

        # In practice, static analysis of plugin code
        # Check for file I/O, network access, etc.

        return []

    def find_suspicious_permissions(self, permissions):
        """Identify concerning permissions."""

        suspicious = ['execute_code', 'access_network', 'delete_files']

        return [p for p in permissions if p in suspicious]

    def is_trusted_source(self, plugin_path):
        """Check plugin origin against the approved list (stub)."""
        # In practice, compare against self.approved_plugins sources
        return True

    def verify_hash(self, plugin_path):
        """Compare the plugin file's hash to the pinned value (stub)."""
        # In practice, compute SHA-256 and compare to the pinned hash
        return True
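The hash-verification step can be implemented with the stdlib `hashlib`. A sketch that streams a file and compares its SHA-256 digest to a pinned value (function names are illustrative):

```python
import hashlib
import hmac

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA-256 hex digest of a file, streaming in chunks
    so large plugin archives are not read into memory at once."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

def verify_hash(path, expected_hex):
    """True if the file's digest matches the pinned hash."""
    # hmac.compare_digest is a constant-time comparison; a defensive
    # habit, though not strictly required for public hashes
    return hmac.compare_digest(sha256_of_file(path), expected_hex)
```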

Supply Chain Security Best Practices

Practice                      Implementation
--------------------------    ----------------------------------------------
Inventory all dependencies    Document all external components
Regular audits                Check for vulnerabilities quarterly
Approved versions             Maintain a list of approved dependency versions
Hash verification             Verify package hashes
Principle of least privilege  Limit what external code can do
Sandboxing                    Run external code in a restricted environment
Monitoring                    Detect unexpected behavior from dependencies
Incident response             Plan for compromised dependencies
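For Python dependencies, hash verification is built into pip's hash-checking mode: pin exact versions in requirements.txt with `--hash` entries and install with `pip install --require-hashes -r requirements.txt`. A fragment of what that looks like (the digest below is a placeholder, not a real hash):

```
# requirements.txt excerpt (hash-checking mode; digest is a placeholder)
requests==2.31.0 \
    --hash=sha256:<digest-of-the-published-wheel>
```

With hashes present, pip refuses to install any package whose downloaded artifact does not match, which blocks tampered or substituted releases.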

Key Takeaway

Third-party dependencies introduce supply chain risk. Maintain a complete inventory, audit regularly for vulnerabilities, use approved versions, verify integrity with hashes, and sandbox external code.

Exercise: Secure Your Supply Chain

  1. Create dependency inventory for your AI system
  2. Scan for vulnerabilities in all dependencies
  3. Define approved versions for each dependency
  4. Implement validation to enforce approved versions
  5. Add hash verification for critical components
  6. Set up monitoring for suspicious behavior from dependencies

Next Lesson: Model Provenance and Integrity—tracking model lineage and ensuring authenticity.