apimade, an hour ago
The same way you write malware without it being detected by EDR/antivirus: bit by bit. Over the past six weeks, I've been using AI to support penetration testing, vulnerability discovery, reverse engineering, and bug bounty research. What began as a collection of small, ad-hoc tools has evolved into a structured framework: a set of pipelines for decompiling, deconstructing, deobfuscating, and analyzing binaries, JavaScript, Java bytecode, and more, alongside utility scripts that automate discovery and validation workflows. (A rough sketch of what one stage of such a pipeline can look like is at the end of this comment.)

I primarily use ChatGPT Pro and Gemini. Claude is effective for software development tasks, but its usage limits make it impractical for day-to-day work. From my perspective, Anthropic subsidizes high-intensity users far less than its competitors, which affects how far one can push its models. That said, their models have been getting more economical recently, and I'd shift to them completely purely on the strength of their models and infrastructure.

Having said all that, I've never had issues with providers regarding this type of work. While my activity is likely monitored for patterns associated with state-aligned actors (similar to recent news reports you may have read), I operate under my real identity and company account. Technically, some of this usage may sit outside standard Terms of Service, but in practice I'm not aware of any penetration testers who have faced repercussions -- and I'd quite happily take the L if I fell afoul of some automated policy, because competitors would quite happily take advantage of that situation.

Larger vuln research/pentest firms may deploy private infrastructure for client-side analysis, but most research and development still takes place on commercial AI platforms -- and as far as I'm aware, there isn't a single instance of Google, Microsoft, OpenAI or Anthropic shutting down legitimate research use.
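For anyone curious what "pipeline" means concretely, here's a minimal, illustrative sketch of one stage: decompile a Java artifact and hand the output to a model in chunks for triage. The tool choice (jadx), chunk size, prompt wording, and the analyze_with_llm() stub are all my own assumptions for the example, not a description of the actual framework; wire the stub to whatever provider you use.

```python
#!/usr/bin/env python3
"""Illustrative pipeline stage: decompile, chunk, and triage with an LLM.

Assumptions: jadx is on PATH for Java/APK decompilation, and analyze_with_llm()
is a placeholder for whichever model/provider you actually use.
"""

import subprocess
import sys
from pathlib import Path

CHUNK_LINES = 400  # keep each prompt comfortably inside the model's context window


def decompile(artifact: Path, out_dir: Path) -> Path:
    """Decompile Java bytecode / an APK to source with jadx (assumed installed)."""
    subprocess.run(["jadx", "-d", str(out_dir), str(artifact)], check=True)
    return out_dir


def chunk_sources(src_dir: Path):
    """Yield (file, chunk) pairs small enough to send as a single prompt."""
    for f in sorted(src_dir.rglob("*.java")):
        lines = f.read_text(errors="ignore").splitlines()
        for i in range(0, len(lines), CHUNK_LINES):
            yield f, "\n".join(lines[i:i + CHUNK_LINES])


def analyze_with_llm(prompt: str) -> str:
    """Placeholder: plug in your own LLM client here."""
    raise NotImplementedError("connect this to your provider of choice")


def triage(artifact: Path, work_dir: Path) -> None:
    """Decompile the artifact, then ask the model to flag likely weak points."""
    src = decompile(artifact, work_dir / "decompiled")
    for f, chunk in chunk_sources(src):
        prompt = (
            "You are assisting an authorized security assessment. Review this "
            f"decompiled code from {f.name} for injection points, hardcoded "
            "secrets, and unsafe deserialization:\n\n" + chunk
        )
        print(f"--- {f} ---")
        print(analyze_with_llm(prompt))


if __name__ == "__main__":
    triage(Path(sys.argv[1]), Path("work"))
```

The real value comes from chaining stages like this (deobfuscation, string/secret extraction, call-graph summaries) and feeding each stage's findings into the next prompt, rather than from any single clever prompt.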