By: Nana Appiah Acquaye

The Group Technology and AI Executive at MTN Group, Charles Molapisi, has outlined critical trends shaping the future of artificial intelligence infrastructure following his participation in NVIDIA GTC 2026.

Reflecting on discussions at the NVIDIA campus, Molapisi emphasized that access to energy and infrastructure will be a defining factor in the global AI race, noting that the ability to efficiently convert power into computational output at scale will determine competitiveness.

He highlighted the growing importance of workload optimization, pointing to the need for organizations to strategically deploy different types of processors, including GPUs, CPUs and emerging architectures, to achieve cost efficiency and performance.

Molapisi also pointed to a major architectural shift in NVIDIA’s roadmap, describing the transition from the Blackwell platform to the Vera Rubin platform as a move toward “AI factories,” with deeper integration of silicon photonics and new storage layers designed to address memory constraints in large-scale AI models.

He noted that the introduction of heterogeneous computing models, including innovations such as language processing units, signals a departure from GPU-only data center designs, enabling different chips to handle specialized components of AI workloads.

Beyond infrastructure, Molapisi identified agentic AI and large-scale inference as key growth areas, while stressing the importance of clear governance frameworks to manage the increasing complexity of AI-driven systems.

His remarks underscore the evolving dynamics of AI deployment, as organizations and countries position themselves to harness next-generation computing technologies for economic and technological advancement.