Compass' Ryan Glynn on Why LLMs Shouldn't Make Security Decisions — But Should Power Them
Tue Jan 27 2026
Ryan Glynn, Staff Security Engineer at Compass, has a practical AI implementation strategy for security operations. His team built machine learning models that removed 95% of on-call burden from phishing triage by combining traditional ML techniques with LLM-powered semantic understanding.
He also explores where AI agents excel versus where deterministic approaches still win, why tuning detection rules beats prompt-engineering agents, and how to build company-specific models that solve your actual security problems rather than chasing vendor promises about autonomous SOCs.
Topics discussed:
- Language models excel at documentation and semantic understanding of log data for security analysis
- Using LLMs to create binary feature flags for machine learning models enables more flexible detection engineering
- Agentic SOC platforms sometimes claim to analyze data they aren't actually querying accurately
- Tuning detection rules directly proves more reliable than prompt-engineering agent analysis behavior
- Intent classification in email workflows helps automate triage of forwarded and reported phishing attempts
- Custom ML models addressing company-specific burdens can achieve a 95% reduction in analyst workload for targeted problems
- Alert tagging systems with simple binary classifications enable better feedback loops for AI-assisted detection tuning
- Context-gathering costs in security make efficiency critical when deploying AI agents across diverse data sources
- Query language complexity across SIEM platforms creates challenges for general-purpose LLM code generation
- Explainable machine learning models remain essential for security decisions requiring human oversight and accountability

Listen to more episodes:
Apple
Spotify
YouTube
Website
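One topic above, using an LLM to emit binary feature flags that a traditional ML model then consumes, can be sketched roughly as follows. This is a minimal illustration, not the implementation discussed in the episode: the flag names, the questions, and the `extract_flags` helper are all hypothetical, and the LLM call is stubbed with keyword checks so the sketch runs standalone.

```python
# Sketch of the "LLM as binary feature extractor" pattern: ask a model a set
# of yes/no questions about each email, then feed the 0/1 answers to a
# conventional, explainable classifier. Names and questions are hypothetical.

FLAG_QUESTIONS = {
    "urgency": "Does the email pressure the reader to act immediately?",
    "credential_request": "Does the email ask for credentials or payment details?",
    "sender_mismatch": "Does the display name conflict with the sender domain?",
}

def extract_flags(email_text: str) -> dict[str, int]:
    """Stub for an LLM call: answer each question as a 0/1 flag.

    Here the "LLM" is faked with keyword checks so the sketch is runnable;
    a real system would prompt a model with FLAG_QUESTIONS per email and
    parse its yes/no answers.
    """
    text = email_text.lower()
    return {
        "urgency": int(any(w in text for w in ("urgent", "immediately", "act now"))),
        "credential_request": int("password" in text or "verify your account" in text),
        "sender_mismatch": 0,  # would come from header analysis plus LLM judgment
    }

def to_feature_vector(flags: dict[str, int]) -> list[int]:
    """Fixed-order 0/1 vector for the downstream ML model to consume."""
    return [flags[name] for name in FLAG_QUESTIONS]

vec = to_feature_vector(extract_flags("URGENT: verify your account password"))
```

The point of the pattern is that the LLM only answers narrow semantic questions; the actual detection decision stays in a simple, inspectable model whose features a human can audit.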