Amit Kumar — CTO
🔐 About Me
AI Security Architect | Adversarial ML | LLM/RAG Security | AI Privacy & Governance | MLSecOps Lead | Technical Lead – AI Security | Technical AI Controls Lead

I architect and secure AI systems across the full lifecycle, from threat-aware model development and adversarial training to red teaming, runtime defense, and regulatory-grade governance. My work protects modern AI ecosystems, including LLMs, RAG pipelines, multi-modal models, and generative adversarial networks (GANs), against real-world threats such as prompt injection, LoRA exploitation, RAG/embedding poisoning, model extraction, backdoors, and GAN-based evasion. I blend hands-on security engineering with regulatory and governance frameworks (e.g., GDPR, ISO/IEC 42001, NIST AI RMF) to build resilient, transparent, and privacy-preserving AI for high-stakes sectors such as finance, healthcare, and critical infrastructure.

🛡️ Core Expertise Areas

✅ AI Security Engineering & MLSecOps
- Adversarial ML Defense: FGSM, PGD, CW, DeepFool, Boundary Attacks
- Backdoor & Trojan Detection: Neural Cleanse, Spectral Signature, STRIP
- Model Theft Prevention: PRADA, CopyCat, Black-box Hardening
- RAG Pipeline Defense: Embedding Poisoning, Corpus Tampering, Retrieval Guardrails
- LLM Guardrails: NeMo (Colang), Bedrock, Prompt Filtering, Context Sanitization
- LoRA/PEFT Security: Fine-tuning Abuse Detection, Parameter Injection Monitoring

✅ GenAI Threat Detection, Red Teaming & SIEM Integration
- Prompt Injection & Jailbreak Detection: Canary Prompts, LLM Threat Indicators
- LLM/RAG Telemetry & Attack Surface Monitoring: OpenTelemetry, Tracing, Abuse Signatures
- SIEM Integration: Splunk, ELK, LLM-Specific Alerting Pipelines
- Incident Response Automation: Auto-blocking, API Threat Modeling, Model Quarantine Logic
- Attack Simulation Tools: AdvBench, TREx, RobustBench, Polygraph

✅ AI Privacy & Responsible AI Governance
- Differential Privacy: OpenDP, TensorFlow Privacy, DP-SGD
- Explainability & Fairness: SHAP, LIME, Fairlearn, AIF360
- Model Lifecycle Governance: MLflow, Model Cards Toolkit, DPIA Traceability
- Compliance Alignment: GDPR, ISO/IEC 42001, NIST AI RMF, OECD AI Principles
- Audit-Ready Logging: Inference Audit Trails, Data Access Logs, PII Minimization

✅ Cloud-Native AI Security & Deployment
- Secure AI Workloads: Kubernetes, Docker, Istio (mTLS, Rate Limiting, JWT Auth)
- API & Identity Protection: OAuth2, OpenID Connect, JWT, RBAC, API Gateway WAFs
- AI Supply Chain Security: Model Registry Verification, Hash Signing, SBOMs
- Multi-Cloud GenAI Security: AWS Bedrock, Vertex AI, Azure OpenAI with Guardrails
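To make the adversarial ML work above concrete, here is a minimal sketch of the FGSM idea (perturbing an input in the sign of the loss gradient) on a toy logistic model. All weights and values are illustrative, not from any production system:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: nudge x by eps in the sign of the loss gradient."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of cross-entropy loss w.r.t. the input is (p - y) * w
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear model: classifies x as positive when w.x + b > 0
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1          # correctly classified positive example
x_adv = fgsm(x, y, w, b, eps=0.8)
# x_adv now scores negative: a small, targeted perturbation flips the label
```

Adversarial training (one of the defenses listed above) folds such perturbed examples back into the training set so the model learns to resist them.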
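The canary-prompt and injection-detection work listed above can be illustrated with a deliberately simplified guardrail: pattern-match suspicious instructions in user input, and plant a secret canary token in the system prompt so its appearance in model output signals prompt disclosure. The token value and patterns here are hypothetical placeholders; real guardrails use far richer classifiers:

```python
import re

CANARY = "CANARY-7f3a"  # hypothetical secret marker embedded in the system prompt

INJECTION_PATTERNS = [
    r"ignore (all|previous|the above) instructions",
    r"you are now",
    r"system prompt",
]

def flag_injection(user_input: str) -> bool:
    """Flag input that matches known injection phrasings (case-insensitive)."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def canary_leaked(model_output: str) -> bool:
    """If the canary shows up in output, the model disclosed its system prompt."""
    return CANARY in model_output
```

In practice such checks feed the SIEM alerting pipelines mentioned above rather than blocking in isolation.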
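For the differential privacy tooling above, the core mechanism is easy to sketch: a count query has sensitivity 1, so adding Laplace noise with scale 1/epsilon yields an epsilon-DP release. This is a toy illustration of the Laplace mechanism, not the DP-SGD training procedure itself:

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5          # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-DP: sensitivity 1, noise scale 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical example: count records with age >= 35 under a privacy budget
ages = [23, 35, 41, 29, 52, 38]
noisy = dp_count(ages, lambda a: a >= 35, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; libraries like OpenDP manage the budget accounting that this sketch omits.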
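The hash-signing step in the supply-chain work above reduces to a simple invariant: a model artifact is only loaded if its digest matches the registry record. A minimal sketch (function names are illustrative):

```python
import hashlib

def model_digest(path: str) -> str:
    """Stream a serialized model artifact and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Reject any artifact whose digest differs from the registry record."""
    return model_digest(path) == expected_digest
```

Production pipelines typically pair this with cryptographic signatures over the digest and an SBOM, so provenance survives beyond a single registry.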
Location: Pune, Maharashtra, India
Experience: 11 yrs
Skills
- AI Security Engineering
- MLSecOps
- AI Privacy & Governance
Career Highlights
- Expert in AI Security Engineering and MLSecOps.
- Proven track record in adversarial ML defense.
- Strong background in AI privacy and governance compliance.
Work Experience
SAP Fioneer
AI Security Lead Engineer (5 mos)
A.P. Moller - Maersk
Senior AI Security Engineer (3 yrs 4 mos)
Cloud Security Engineer (4 yrs 5 mos)
Security Engineer (2 yrs 5 mos)
FIS
Senior Network Engineer (1 yr 1 mo)
Infosys Limited
Senior Network Engineer (1 yr 9 mos)
HCL
Associate Engineer (11 mos)
Education
Bachelor of Engineering - CGPA 8.2 (on a 10-point scale) at Shri Dharmasthala Manjunatheshwara College of Engineering and Technology (SDMCET)
Bachelor of Engineering - BE at Visvesvaraya Technological University