Rinki Sethi, chief security officer at Upwind Security and founding partner at Lockstep, brings a wealth of experience from senior security leadership roles at Twitter, Rubrik, Palo Alto Networks, and IBM. In this conversation from the Open Source Security Summit 2025 with Jon Swartz, senior content writer at Techstrong, Rinki shares insights on how AI is reshaping both offensive and defensive cybersecurity, the critical role of identity security, and what it takes to stay ahead in an evolving threat landscape.
The double-edged sword of AI
Jon Swartz: How have attackers leveraged AI to scale their operations beyond what was previously possible?
Rinki Sethi: Attackers are operating at startup scale and speed, and AI is lowering the barrier to entry significantly. Phishing has always been challenging to detect, but AI-generated emails that are personalized, localized, and error-free make attacks far more convincing. We're seeing polymorphic malware that can self-mutate to evade traditional detection systems. Social engineering, voice cloning, and deepfakes are reshaping business email compromise scenarios, making it harder to trust even a phone call.
AI is giving attackers a playbook to move faster, cheaper, and at scale than we've ever seen before.
Identity: The new perimeter
Jon Swartz: Has compromised identity surpassed malware as the most common initial access vector, and if so, why?
Rinki Sethi: Absolutely. Identity is the new perimeter. Credentials, tokens, and session cookies are easier to steal than malware is to build, and once attackers are in, they bypass most endpoint defenses. Credential theft and session hijacking have far surpassed traditional malware as the front door for attackers. Malware is still relevant, but identities open doors with less effort. Compromised identities are the attackers' shortcut to privilege. We've been seeing this for a while, which is why there's so much focus on identity security in the market today.
The cloud's unforgiving nature
Jon Swartz: How are misconfigurations and over-permissioned accounts widening the blast radius of cloud-native breaches?
Rinki Sethi: The cloud is amazing and powerful, giving developers capabilities they didn't have before, but it can also be pretty unforgiving. Over-permissioned identities are like master keys: one compromise can unlock everything. We know this, and we've seen breaches as a result. Misconfigurations often expose data or services unintentionally. Even attackers who are only after Bitcoin mining, which still happens today, get in through misconfigurations. Together, misconfigurations and over-permissioned identities let attackers pivot laterally faster than ever.
In the cloud, mistakes compound. Misconfigurations and identity sprawl turn into breaches.
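To make the "master key" problem concrete, here is a minimal sketch in Python that flags IAM-style policy statements granting wildcard actions or resources. The policy document and helper function are hypothetical illustrations, not drawn from any real environment or product.

```python
# Illustrative sketch: flag IAM-style policy statements that grant
# wildcard actions or resources (the "master key" problem described above).
# The policy below is a hypothetical example, not taken from any real account.

over_permissioned_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # master key
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-logs/*"},              # tightly scoped
    ],
}

def find_wildcard_grants(policy: dict) -> list[dict]:
    """Return Allow statements whose Action or Resource is a bare wildcard."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

for stmt in find_wildcard_grants(over_permissioned_policy):
    print("Over-permissioned statement:", stmt)
```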
AI as a force multiplier for defenders
Jon Swartz: What excites you about the use of AI in defensive security measures?
Rinki Sethi: What has burdened security leaders and engineers is that we have tools with hard-coded detection rules, but what's missing is context. AI enables contextual detection, which is super exciting and where we're seeing tremendous value from AI in cybersecurity. You move from reactive alerts to more predictive insights. AI helps surface combinations that, together at runtime, can be really risky: a risky identity plus a vulnerable workload plus an exposed secret. That's the kind of context you need and the kind of thing you want to act on right away. That's what AI is really helping with.
However, blind trust in AI can introduce bias and false confidence. Too many teams assume that because AI said it, it must be right. That's not necessarily the case.
AI can be a complete force multiplier for defenders, but only if it's paired with human judgment today.
That might change down the line, and the human may not always need to be in the loop, but with where AI is today, you still have to keep an eye on it.
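To illustrate the kind of contextual, runtime-aware triage Rinki describes, here is a minimal sketch in which individually moderate signals become an urgent finding only when they coincide on the same workload, and the decision is routed to a human analyst rather than acted on autonomously. The signal names, scores, and thresholds are hypothetical.

```python
# Illustrative sketch of "contextual detection": individually moderate signals
# become an urgent finding only when they coincide on the same workload at runtime.
# All names, scores, and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class WorkloadContext:
    workload_id: str
    identity_risk: float      # e.g., stale admin credential in active use
    has_critical_vuln: bool   # e.g., unpatched CVE on the running image
    exposed_secret: bool      # e.g., plaintext token found in the environment

def triage(ctx: WorkloadContext) -> str:
    """Combine signals; escalate to a human analyst rather than auto-remediating."""
    risky_combo = ctx.identity_risk > 0.7 and ctx.has_critical_vuln and ctx.exposed_secret
    if risky_combo:
        return f"{ctx.workload_id}: page analyst (risky identity + vulnerable workload + exposed secret)"
    if ctx.identity_risk > 0.7 or ctx.has_critical_vuln or ctx.exposed_secret:
        return f"{ctx.workload_id}: queue for review"
    return f"{ctx.workload_id}: no action"

print(triage(WorkloadContext("payments-api", identity_risk=0.9,
                             has_critical_vuln=True, exposed_secret=True)))
```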
Runtime detection: The critical context
Jon Swartz: How does runtime detection factor into identity security, especially concerning lateral movement or session hijacking?
Rinki Sethi: Static identity checks aren't enough anymore.
Runtime tells you who is doing what in the moment, which is extremely important when it comes to session hijacking. When a valid session suddenly shows anomalous behavior, such as impossible travel or unusual access patterns, that's when you want to know. If lateral movement starts, runtime telemetry is often the first to catch it.
Identity without runtime is like locking your front door but leaving the windows open.
The industry has shifted: it started with looking at configurations and locking those down, and now we're saying runtime is equally important.
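As a concrete example of the runtime signals Rinki mentions, here is a small sketch of an "impossible travel" check: two events on the same session whose implied travel speed exceeds anything plausible. The login events and speed threshold are illustrative assumptions.

```python
# Illustrative "impossible travel" check: two logins on the same session whose
# implied travel speed exceeds what a person (or a plane) could manage.
# The events and threshold below are hypothetical.

from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(evt_a, evt_b, max_kmh=900):
    """Flag if the session 'moved' faster than a commercial flight could."""
    distance = haversine_km(evt_a["lat"], evt_a["lon"], evt_b["lat"], evt_b["lon"])
    hours = abs((evt_b["time"] - evt_a["time"]).total_seconds()) / 3600
    return hours > 0 and distance / hours > max_kmh

login_sf = {"lat": 37.77, "lon": -122.42, "time": datetime(2025, 6, 1, 9, 0)}
login_kyiv = {"lat": 50.45, "lon": 30.52, "time": datetime(2025, 6, 1, 10, 30)}
print(impossible_travel(login_sf, login_kyiv))  # True: roughly 9,900 km in 1.5 hours
```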
The AI balancing act
Jon Swartz: Do you see AI as a net positive in SOC operations, or are we at risk of replacing analyst judgment with automation too soon?
Rinki Sethi: When you use AI to reduce alert fatigue, surface high-confidence signals, and speed up investigations, it's a net positive. If we replace human intuition too early, we'll miss the subtle context that only an analyst sees. That's the risk we would run.
The SOC of the future is AI-assisted, not AI-replaced.
The future: Agentic AI with human oversight
Jon Swartz: Where should the industry draw the line between human oversight and machine autonomy in security workflows?
Rinki Sethi: I believe the future is going to be agentic, though we're a ways away from that. Because of the massive data explosion that's coming, we're going to need solutions uniquely catered to it.
Today, machines should recommend, not fully decide.
I think of it as a human-in-the-loop for escalation, compliance, and business-impacting actions. Automation is great for response playbooks, but not yet for nuanced judgment calls. Autonomy is fine for containment, but humans still own the consequences.
Jon Swartz: As autonomous as the system is, do you think there will always be some human oversight?
Rinki Sethi: There will always have to be a human in the loop, with auditing involved. As you gain confidence in trained models and more sophisticated agents, the bar for where humans need to intervene might rise, but there will still be human oversight at some level.
Who wins the AI arms race?
Jon Swartz: Will AI ultimately tilt the balance of power towards defenders or attackers in the long run?
Rinki Sethi: In the short term, attackers are going to adapt faster. They have fewer rules. In the long term, I hope to see defenders win — that's why I'm in this game. We can apply AI across visibility, detection, and resilience at scale.
Attackers will win the battles, but AI can help defenders win the war if we move just as fast. That's the key.
One of the big issues, especially in large enterprises, is how to move quickly. Because these companies are big, with lots of regulations and mandates, it takes time to catch up. We're really going to have to think about the speed at which we move, and about where we stay agile versus where we keep the right checks and balances.
Jon Swartz: On the flip side, if something were to go awry, there’s that great fear that AI going wrong could lead to legal action or some sort of corporate embarrassment.
Rinki Sethi: I think that’s absolutely right. Right now, there’s risk aversion, but that’s going to change.
Jon Swartz: Do you think we might see regulation in the U.S., or at least some sort of oversight within heavily regulated industries?
Rinki Sethi: I do think we’re going to see some regulation, but as we see with everything, regulation is going to trail adoption and will take some time.
Rapid fire insights
Q: What is one tool you couldn't live without as a CSO?
A: My network of peers is invaluable. I don't think there's any product that beats practitioner intelligence and that network.
Q: What's the first thing you check in a post-breach investigation?
A: Identity logs. Who got in, and what did they touch?
Q: What's the biggest buzzword in security right now that makes you roll your eyes?
A: Everything is an AI company right now. It's a legitimate buzzword, but there's also a lot of misuse of the word AI. Folks are packaging automation as AI. You have to dig deeper to understand what they mean by AI and what actual impact it will bring to the company.
Q: Passkeys in five years: standard everywhere or still in transition?
A: Still in transition, unfortunately. I wish it were standard, but standards move more slowly than attackers. I've seen more standardization around passkeys in the past year than before, so hopefully there will be rapid scaling, but I think it's still going to be in transition.
Q: Will agentic AI reduce burnout for security teams or create more fires to fight?
A: Both. It's going to remove the grunt work but introduce new risks that we'll have to fight.