About WWT
World Wide Technology (WWT) is a global technology solutions provider with over US$20 billion in annual revenue, 10,000+ employees, and operations across the Americas, EMEA, and APAC. WWT combines strategy, execution, and partnership to help the world’s largest organisations adopt transformative technology — from cloud and security to AI and advanced infrastructure.
WWT’s AI Infrastructure Practice designs, deploys, and operates the physical and logical foundations that make enterprise AI possible — GPU compute clusters, high-performance networking, storage architectures, power and cooling systems, and the platforms that tie them together. As eleven-time NVIDIA Partner of the Year for AI and Deep Learning, WWT brings unmatched depth in AI infrastructure at enterprise scale.
The Opportunity
Most AI infrastructure failures happen at the seams — where compute meets network, where storage meets platform, where facilities meet hardware. You will be the integrator who prevents this.
As an End-to-End AI Infrastructure Architect, you bring together compute, networking, storage, facilities, platform, and security into coherent architectures that actually work when thousands of GPUs need to train a model together. You will be the named Design Authority on WWT’s most complex AI infrastructure programmes, ensuring every component works as a system.
What You’ll Do
• Integrate compute, networking, storage, facilities, and platform layers into unified AI infrastructure architectures
• Serve as named Design Authority on major AI infrastructure programmes — owning the architectural integrity end-to-end
• Map AI infrastructure architectures to client business outcomes: cost-per-training-run, time-to-inference, utilisation efficiency
• Evaluate and recommend vendor-neutral component stacks: GPU compute (NVIDIA, AMD), networking, storage, and platform
• Design reference architectures for common AI infrastructure patterns: training clusters, inference farms, hybrid training/inference
• Coordinate between domain specialists (networking, storage, MEP, platform) to resolve cross-domain design conflicts
• Conduct architectural reviews and risk assessments at programme milestones
• Present architecture decisions and trade-offs to client CTO/VP-level stakeholders
What We’re Looking For
Must-Haves
• 15+ years in infrastructure and solution architecture, spanning compute, networking, and storage
• 3–5+ years designing AI/ML or HPC infrastructure platforms at scale
• TOGAF, SABSA, or equivalent architecture framework experience
• Vendor-neutral evaluation capability across GPU compute, networking, storage, and platform vendors
• Client-facing design authority experience on programmes of significant scale
• Systems-level thinking — ability to understand how component interactions affect overall system performance
Nice-to-Haves
• Experience with NVIDIA DGX/HGX SuperPOD reference architectures
• Understanding of AI platform software: Kubernetes, Slurm, Base Command Manager
• Knowledge of TCO modelling for AI infrastructure (build vs colo vs cloud)
• Experience with air-gapped or sovereign AI infrastructure deployments
Why WWT
• Join a US$20B+ global technology leader with startup energy in the AI Infrastructure practice
• Work on the most advanced AI infrastructure deployments in EMEA — GPU superclusters, liquid cooling, InfiniBand fabrics
• Direct access to NVIDIA, HPE, Dell, Cisco, and leading AI infrastructure vendors as strategic partners
• Enterprise-scale projects with Fortune 500 and regulated industry clients
• Remote-first flexibility with travel to client sites and data centres
• Competitive compensation, benefits, and professional development