A new AI infrastructure startup founded by Zain Asgar, Michelle Nguyen, Omid Azizi, and Natalie Serrino is building a multi-silicon AI cloud designed to run AI workloads across CPUs, GPUs, and memory systems, rather than relying on a single chip architecture.
The company partners with major chipmakers including NVIDIA, AMD, Intel, Arm, Cerebras, and d-Matrix, allowing enterprises to run AI models on the most efficient hardware available at any given time. The claimed result: 3x–10x faster inference at the same cost, a major advantage as AI workloads become more expensive and compute demand surges.
The company has raised $92 million to date from investors including Factory, Eclipse, Prosperity7 Ventures, and Triatomic Capital, positioning itself in the rapidly growing AI infrastructure market. With global data centre spending expected to reach $7 trillion in the coming years, the real opportunity lies not just in building more compute, but in optimising idle and underutilised compute.
Why this matters: Most AI infrastructure today is inefficient, with compute resources often sitting idle. Multi-silicon orchestration platforms aim to allocate workloads dynamically across heterogeneous hardware, improving performance while reducing costs, a key requirement as AI adoption scales across industries. A toy sketch of that scheduling decision follows below.
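To make the orchestration idea concrete, here is a minimal, hypothetical sketch of the kind of placement decision such a platform makes: given several silicon backends with different throughput, price, and current load, route each job to the cheapest effective option. All names, numbers, and the cost model are illustrative assumptions, not the startup's actual API or algorithm.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str               # e.g. "gpu-a", "cpu" (hypothetical labels)
    tokens_per_sec: float   # estimated throughput for this workload
    cost_per_hour: float    # on-demand price
    utilization: float      # current load, 0.0 (idle) to 1.0 (saturated)

def cost_per_million_tokens(b: Backend) -> float:
    # Effective throughput shrinks as the backend fills up; an idle
    # accelerator is effectively cheaper because no capacity is wasted.
    effective_tps = b.tokens_per_sec * (1.0 - b.utilization)
    if effective_tps <= 0:
        return float("inf")  # saturated: never schedule here
    hours_per_million = 1e6 / effective_tps / 3600
    return b.cost_per_hour * hours_per_million

def place_workload(backends: list[Backend]) -> Backend:
    # Greedy placement: route the job to the cheapest effective backend.
    return min(backends, key=cost_per_million_tokens)

fleet = [
    Backend("gpu-a", tokens_per_sec=20_000, cost_per_hour=4.00, utilization=0.9),
    Backend("gpu-b", tokens_per_sec=8_000,  cost_per_hour=1.20, utilization=0.1),
    Backend("cpu",   tokens_per_sec=1_500,  cost_per_hour=0.30, utilization=0.2),
]
chosen = place_workload(fleet)
print(f"route to {chosen.name} at ${cost_per_million_tokens(chosen):.2f}/M tokens")
```

In this toy model, the heavily loaded high-end GPU loses to the mostly idle cheaper one, which is the core intuition behind monetising underutilised compute: the best chip on paper is not always the best chip right now.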
Bottom line: The next phase of AI competition may not be about who has the most compute, but who uses compute most intelligently, and multi-silicon AI clouds could become a critical layer in the AI infrastructure stack.

