January 2026 Meeting Notes: AI Security and the 2026 Vision
The 2026 Board and Vision
The January ISC2 Toronto Chapter meeting introduced the 2026 board and set the direction for the coming year. Jon Rohrich took the floor as President to outline his focus on inclusive programming. His goal is to make sure chapter events provide real value across every security role, from pen testers and system admins to CISOs and students.
He is joined by Vivian Odii as Vice President and Treasurer, who is prioritizing community impact and ensuring the chapter's initiatives are properly resourced. Camille Kloppenburg stepped into the Director of Strategic Partnerships role. She will focus on building vendor sponsorships and expanding the chapter's external network to help it grow.
Panelists and Speakers
The evening featured a great mix of chapter leadership and technical expertise:
- Jon Rohrich: Chapter President
- Vivian Odii: Vice President and Treasurer
- Camille Kloppenburg: Director of Strategic Partnerships
- Amaar Malik: Cybersecurity Expert at Accenture and our main presenter for the evening
- Active Chapter Members: Attendees who brought the conversation to life by sharing real-world insights and asking sharp questions.
The Reality of AI-Driven Threats
The discussion then moved to how AI is changing the threat landscape. Attackers are using AI to lower the usual barriers to entry, and they are moving much faster than defenders, who have to contend with organizational red tape.
The panel shared some sobering examples of this shift. A Fortune 500 finance employee recently transferred billions of dollars after being tricked by a highly convincing deepfake video call. In another incident, attackers used a cloned CEO voice to target a newly acquired company within an hour of the merger announcement. Threat actors are also using tools to generate complex malware frameworks in a matter of days instead of months. It is clear that traditional defensive strategies are no longer enough.
Updating the Defensive Playbook
Security teams need to adapt by using AI defensively while building entirely new guardrails. The panel spent a lot of time discussing the “Traffic Hop” agent. This is a specific security layer sitting right between the user and the AI model. Its only job is to scrub and block malicious inputs, like prompt injections, before they ever reach the core AI.
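To make the idea concrete, here is a minimal sketch of what an input-scrubbing layer like the "Traffic Hop" agent might look like. The `scrub_prompt` function, the `INJECTION_PATTERNS` list, and the `traffic_hop` wrapper are illustrative assumptions, not the actual implementation discussed by the panel; a production filter would rely on a maintained ruleset or a trained classifier rather than a handful of regexes.

```python
import re

# Hypothetical patterns that often signal prompt-injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disregard your system prompt", re.IGNORECASE),
    re.compile(r"reveal your (hidden|system) prompt", re.IGNORECASE),
]

def scrub_prompt(user_input: str) -> tuple[bool, str]:
    """Return (allowed, reason). Block input matching known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "clean"

def traffic_hop(user_input: str, model_call) -> str:
    """Sit between the user and the model: only clean input reaches the core AI."""
    allowed, reason = scrub_prompt(user_input)
    if not allowed:
        return f"Request rejected ({reason})"
    return model_call(user_input)
```

The key design point is placement: the filter runs before the model ever sees the text, so a blocked prompt never gets a chance to manipulate the core AI.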
Input scrubbing is just the first line of defense. The group agreed that Identity and Access Management (IAM) is the single most critical security control for AI. Scaling AI safely requires treating these agents as non-human identities and using an automated orchestration framework to manage their permissions across enterprise systems. By enforcing the principle of least privilege at the identity level, organizations create a hard, structural barrier.
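Treating agents as non-human identities with least-privilege permissions can be sketched roughly as below. The `AgentIdentity` class, permission strings, and deny-by-default `authorize` check are hypothetical illustrations of the principle, not any specific orchestration framework mentioned at the meeting.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicit, minimal permission set."""
    name: str
    permissions: frozenset = field(default_factory=frozenset)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to this agent."""
    return action in agent.permissions

# Example: a triage bot that can read alerts and open tickets, nothing else.
triage_bot = AgentIdentity(
    name="soc-triage-bot",
    permissions=frozenset({"alerts:read", "tickets:create"}),
)
```

Because the check is deny-by-default, any permission the agent was never granted is structurally unreachable, which is exactly the "hard barrier" the panel described.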
Basic infrastructure controls are also still mandatory. Deploying AI on cloud platforms requires standard network segmentation and data encryption to limit the damage if a compromise happens.
Governance, Testing, and the Human Element
Security teams should avoid being a roadblock to business innovation. Instead of writing overly restrictive policies, the focus should shift to providing safe and isolated testing environments for engineering teams.
The panel stressed the need for continuous adversarial testing. When probed with thousands of adversarial prompts, AI agents frequently hallucinated, broke policy, or generated inappropriate content. Developers rely on this testing feedback to refine models and implement filters that catch sensitive data leaks.
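A continuous testing harness of this kind can be as simple as replaying a prompt corpus against the model and tallying policy violations. This sketch is an assumption about how such a loop might be structured; `BANNED_MARKERS` and `run_adversarial_suite` are illustrative names, and a real harness would use a proper leak classifier rather than substring checks.

```python
# Hypothetical markers that would indicate a sensitive-data leak in a reply.
BANNED_MARKERS = ["password:", "ssn:", "api_key="]

def run_adversarial_suite(model_call, prompts):
    """Replay adversarial prompts and collect any replies that leak data."""
    failures = []
    for prompt in prompts:
        reply = model_call(prompt).lower()
        if any(marker in reply for marker in BANNED_MARKERS):
            failures.append((prompt, reply))
    return failures
```

Run on a schedule, the failure list becomes the feedback loop the panel described: each flagged reply points developers at a filter or fine-tuning fix.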
AI brings clear efficiency gains. For example, it can cut SOC alert triage times from an hour down to just a few minutes. Even with those gains, human oversight is still required. Teams need a human in the loop to provide context, address bias, and make difficult risk decisions in high-stakes areas like insider risk or financial approvals.
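One way to wire in that human-in-the-loop requirement is a routing rule that lets the AI close or escalate routine alerts on its own while forcing high-stakes or ambiguous cases to a human queue. The thresholds, category names, and `triage` function here are invented for illustration, not a prescription from the panel.

```python
# Hypothetical categories that must always receive human review.
HIGH_STAKES = {"insider_risk", "financial_approval"}

def triage(alert_score: float, category: str) -> str:
    """Route an alert: AI auto-handles clear cases, humans take the rest."""
    if category in HIGH_STAKES:
        return "human_review"          # high-stakes areas always escalate
    if 0.4 < alert_score < 0.7:
        return "human_review"          # ambiguous confidence band
    return "auto_escalate" if alert_score >= 0.7 else "auto_close"
```

The design choice is that the model never makes the final call in the areas where context and bias matter most; speed gains come only from the clear-cut ends of the score range.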