
About this role
Full Time Senior Staff AI Application and Cloud Ops Engineer (SaaS) at OpenGov in Pune, India.
At a glance
- Work mode: Office
- Employment: Full Time
- Location: Pune, India
- Experience: Senior · 10+ years
Core stack
- Languages & frameworks: Python, FastAPI, Flask, Node.js, React
- Delivery: Docker, GitHub Actions, CI/CD, Agile
- Data & AI: Snowflake, RAG
- Integrations: Slack, Salesforce, Workday, ERP
- Operations: Infrastructure, Observability, Logging, Debugging, Performance
- Domain: Contract Management, Accounting, Budgeting, Compliance
- General: Architecture, Design, Cross-functional, Computer Science, Innovation, Efficiency
OpenGov is the leader in AI and ERP solutions for local and state governments in the U.S. More than 2,000 cities, counties, state agencies, school districts, and special districts rely on the OpenGov Public Service Platform to operate efficiently, adapt to change, and strengthen the public trust. Category-leading products include enterprise asset management, procurement and contract management, accounting and budgeting, billing and revenue management, permitting and licensing, and transparency and open data. These solutions come together in the OpenGov ERP, allowing public sector organizations to focus on priorities and deliver maximum ROI with every dollar and decision in sync. Learn about OpenGov’s mission to power more effective and accountable government and the vision of high-performance government for every community at OpenGov.com.
Job Summary
OpenGov is seeking a highly execution-oriented AI Application & Cloud Ops Engineer to design, build, and deploy internal AI-powered applications that improve how our teams operate. You will be among the first members of our Pune AI engineering team, with significant ownership and influence over how we build and scale AI systems globally.
This role focuses on developing intuitive front-end interfaces, building robust backend APIs, and productionizing AI workflows using Snowflake Snowpark Container Services (SPCS) and modern CI/CD practices that support agentic experiences.
You will work closely with AI, Data, and Enterprise Systems teams to transform in-house AI capabilities into usable, scalable applications. From internal web and Slack apps to production-grade AI-first copilots and AI-native interfaces, you will be responsible for shipping end-to-end solutions that are secure, observable, and maintainable.
This is a hands-on engineering role requiring strong full-stack fundamentals, disciplined deployment practices, and experience containerizing and operationalizing applications within a Snowflake-centric environment.
We are looking for a builder who can move quickly from data product to UI mockup to deployed solution, while maintaining production-grade standards.
Core Responsibilities
Design, architect, and develop internal AI-powered applications using modern frontend frameworks such as React, or lightweight alternatives such as Streamlit.
Apply AI coding best practices to build and maintain backend services and APIs (FastAPI, Flask, Node.js, or similar) that support AI-driven workflows and integrations.
Containerize applications and deploy them using Snowflake Snowpark Container Services (SPCS).
Design, implement, and maintain CI/CD pipelines using GitHub Actions or equivalent tooling.
Establish version control standards, branching strategies, automated testing workflows, and release processes for internal, fast-paced tools.
Deploy and manage multi-component applications, including operational data stores, APIs, orchestration services, and frontend clients.
Implement authentication, authorization, and secure access patterns across internal tools.
Integrate AI workflows into enterprise platforms such as Slack, Salesforce, and other SaaS systems.
Partner with AI Platform engineers to expose RAG pipelines, agent workflows, and data services through scalable APIs and interfaces.
Implement logging, monitoring, and observability best practices across deployed applications.
Optimize application performance, container efficiency, and deployment reliability.
Support rapid prototyping of internal tools while ensuring smooth transition from prototype to production.
Continuously improve deployment automation and reduce operational friction.
Required Experience
Bachelor’s degree in Computer Science, Engineering, or a related field.
7–10 years of experience in software engineering, platform engineering, or full-stack development roles.
Strong hands-on experience with frontend development using React or similar modern frameworks.
Experience building backend APIs using Python (FastAPI/Flask).
Hands-on experience containerizing applications using Docker.
Experience deploying applications using Snowflake Snowpark Container Services (SPCS).
Strong experience building and maintaining CI/CD pipelines (GitHub Actions preferred).
Strong understanding of Git workflows, version control, and release management best practices.
Experience building and deploying multi-service applications in cloud environments.
Understanding of secure authentication, secrets management, and enterprise access control.
Ability to independently own and deliver end-to-end application deployments.
Strong debugging and troubleshooting skills across frontend, backend, and infrastructure layers.
Nice to Have
Experience building AI-powered internal tools, copilots, or enterprise dashboards.
Experience integrating with enterprise platforms such as Slack, Salesforce, Workday, NetSuite, or similar SaaS systems.
Familiarity with Snowflake data architecture and working with Snowflake-hosted APIs.
Experience implementing automated testing frameworks across frontend and backend systems.
Experience with observability tools or cloud-native monitoring platforms.
Understanding of multi-tenant SaaS deployment models.
Experience working in compliance-heavy or public sector environments.
Experience collaborating with cross-functional stakeholders and participating in agile delivery processes.
Why OpenGov?
A Mission That Matters
At OpenGov, public service is personal. We are passionate about our mission to power more effective and accountable government: government that operates efficiently, adapts to change, and strengthens public trust. Some people say this is boring. We think it's the core of our democracy.
Opportunity to Innovate
The next great wave of innovation is unfolding with AI, and it will impact everything—from the way we work to the way governments interact with their residents. Join a trusted team with the passion, technology, and expertise to drive innovation and bring AI to local government. We’ve touched 2,000 communities so far, and we’re just getting started.
A Team of Passionate, Driven People
This isn’t your typical 9-to-5 job; we operate in a fast-paced, results-driven environment where impact matters more than simply clocking in and out. Our global team of 800+ employees is united in our commitment to challenge the status quo. OpenGov is headquartered in San Francisco and has offices in Atlanta, Boston, Buenos Aires, Chicago, Dubuque, Plano, and Pune.
A Place to Make Your Mark
We pride ourselves on our performance-based culture, where every employee is encouraged to jump in head-first and take action to help us improve. If you have a great idea, we want to hear it. Excellent performance is recognized and rewarded, and we love to promote from within.