Your mission
The Hashgraph Group (THG) is a global organization headquartered in Switzerland and part of the Hedera Hashgraph (“Hedera”) ecosystem.
Hedera is a proof-of-stake public Distributed Ledger Technology (DLT) network that is fast emerging as the gold standard in DLT for enterprise-grade solutions and decentralized applications (dApps). Hedera is governed by a council of the world’s leading organizations, including Google, Boeing, IBM, Dell, Deutsche Telekom, LG, Abrdn, and the London School of Economics, among others.
THG works closely with enterprises, startups, governments, and academic and training institutions around the world to deliver financing, custom-designed solutions, and professional training and innovation programs aimed at accelerating the development and adoption of the Hedera Hashgraph network.
Your profile
We are looking for an Artificial Intelligence Lead to own, design, and scale AI capabilities across the organization. This role blends hands-on technical leadership with strategic thinking, applying AI directly to business problems and powering next-generation products.
You will lead AI architecture decisions, mentor a high-performing team, and work closely with Product, Engineering, and Business stakeholders to turn AI into real-world impact.
Key Responsibilities
Strategy & Vision
Define and execute an AI roadmap aligned with business and product goals
Identify high-impact AI/ML use cases across product, operations, and growth
Stay ahead of trends in Generative AI, LLMs, agents, and applied ML
Architecture & Development
Design scalable AI/ML architectures (training, inference, pipelines, MLOps)
Build and deploy models using Python, PyTorch, TensorFlow, Hugging Face, and OpenAI-style APIs
Lead development of LLM-based systems (RAG, agents, fine-tuning, prompt engineering)
Ensure model performance, security, fairness, and explainability
Leadership & Collaboration
Lead, mentor, and grow a team of AI/ML engineers and data scientists
Partner closely with Product Managers to translate business problems into AI solutions
Collaborate with backend, frontend, and data engineering teams
Production & Scale
Own model deployment, monitoring, retraining, and optimization
Implement best-in-class MLOps practices (CI/CD, observability, governance)
Optimize inference cost, latency, and scalability in production environments