Responsibilities:
- Conduct comprehensive security assessments for internal and external generative AI systems
- Develop and execute security testing methodologies for Generative AI models
- Assist with setting up local language models using tools like ollama
- Identify and document potential vulnerabilities in Generative AI systems
- Analyze Generative AI model risks, including prompt injection, data leakage, and adversarial attacks
- Create detailed technical reports on Generative AI security findings and configuration guidelines
- Collaborate with internal AI development teams to implement security recommendations
- Research emerging AI security threats and mitigation strategies
Requirements:
- Degree in Cybersecurity, Computer Science or the equivalent with minimum 1 year of relevant experience in IT, cybersecurity and risk management
- Basic understanding of machine learning and AI principles
- Sound knowledge in Generative AI models
o GPT
o Llama
o Claude
o DeepSeek
· Proficiency in:
o Python
o Cybersecurity tools
o Machine learning model analysis
o Vulnerability assessment techniques
· Familiarity with:
o AI model testing frameworks
o Adversarial machine learning concepts
o Cloud security principles
o Running or setting up locally run models (ie ollama)
o Prompt engineering techniques