Career Openings
Senior – GenAI Cloud Security Architect
You will be part of the Cloud Security team, which is responsible for safeguarding the organization’s digital assets, data, and applications hosted in cloud environments. Our primary focus is ensuring the integrity, confidentiality, and resilience of AI models: detecting and mitigating vulnerabilities, implementing robust encryption practices, and safeguarding against potential misuse or exploitation of generated outputs. The team is based in Chicago, Toronto, and Hyderabad.
Responsibilities and Impact:
A cloud security team plays a vital role in enhancing an organization’s security posture within cloud environments. By proactively identifying and mitigating risks, safeguarding sensitive data, ensuring compliance, and responding to security incidents, the team not only reduces security threats but also instills confidence, supports innovation, and promotes business continuity, contributing significantly to S&P Global’s success and resilience in the cloud era. The position is critical, as the business is dynamic and constantly evolving to Power the Markets of the Future.
- Develop and implement comprehensive AI/ML security strategies, policies, standards, and guidelines to protect organizational assets and ensure the secure operation of AI and ML systems.
- Build a security control framework and generic reference architectures for GenAI applications.
- Assist with identifying security requirements to be followed by LoB/Dev teams when building GenAI applications.
- Conduct threat modeling exercises to identify potential security risks and vulnerabilities in AI systems, working closely with AI development teams to integrate security into the design and development processes.
- Provide thought leadership and creativity to mature GenAI security governance and embed it into our existing cybersecurity risk appetite framework.
- Perform security assessments on AI applications and systems to identify and address vulnerabilities. Develop and implement testing methodologies to evaluate the security posture of AI models and frameworks.
- Develop configuration hardening guidelines for cloud services, including native generative AI/ML services such as AWS SageMaker, SageMaker Notebooks, Bedrock, Kendra, OpenSearch, Lambda, Azure Cognitive Services, OpenAI, and Google Cloud Vertex AI.
- Stay updated on relevant regulations and standards related to AI security and ensure compliance. Collaborate with legal and compliance teams to align AI systems with industry and regulatory requirements.
Apply Now
Explore more career options
We are always eager to meet fresh talent. Check out other career options at Banking Labs.
View All Openings