GSMA Launches Open Telco AI to Build Telco-Grade Models and Tooling
Context and Chronology
The GSMA has created a collaborative platform called Open Telco AI to coordinate model development, datasets and compute targeted at telecom operators and vendors. The portal centralizes open telco models, curated telecom datasets and access to training and inference resources, and links to developer engagement programmes, including a troubleshooting challenge that drew more than 1,000 registrations. The GSMA positioned the project to address a measurable shortfall in operator-grade solutions: only 16% of GenAI deployments had been applied to network operations, a gap driving vendor and operator interest in specialized tooling. For registration and resources, the initiative points users to GSMA.com/open-telco-ai, which will host the model library and leaderboard.
What was released
Founding contributors named in the launch include AT&T and AMD, with compute support routed through AMD hardware and partner TensorWave. The portal will publish multiple open-weight telco models from contributors, plus a library of knowledge graphs, embeddings and fine‑tuning datasets submitted by universities and industry groups. A public leaderboard will report performance across seven telecom-specific benchmarks, allowing teams to run evaluations in their own environments and submit results for repeatable comparison. Community activities — competitions, challenges and shared pipelines for synthetic data generation — are integral to the release plan and to seeding reproducible baselines.
Broader industry context and parallel efforts
Separately, an industry group anchored by NVIDIA has convened operators and vendors around a complementary agenda that places programmable edge compute, low-latency inference and orchestration primitives at the centre of next‑generation radio architectures. Participants publicly associated with that effort include Nokia, SoftBank, and T‑Mobile US, and its emphasis is on reference implementations that embed accelerators and telemetry pipelines into radio and edge stacks rather than on an open model library and benchmarks alone.
Strategic implications and synthesis
GSMA’s benchmark-led, model-and-data-centric approach and the NVIDIA‑anchored architecture consortium represent two industry responses to the same operational pressures: the need for reproducible model evaluation, faster operator trials, and cloud‑to‑edge inference. They can be complementary — GSMA’s leaderboards could supply the objective metrics that architecture initiatives use to select accelerators and orchestration patterns — but they also risk divergence: if benchmark definitions or datasets favour particular hardware profiles or telemetry interfaces, one track may lock in different de facto standards than the other. Technical and regulatory constraints — deterministic latency, spectrum safety, and auditability — will require staged field trials and joint lab validations before many operator deployments proceed.
For operators, the immediate choice is whether to participate in one or both tracks to influence benchmarks and reference implementations. For semiconductor and cloud partners, the split creates multiple avenues to capture value: contributing validated models and compute (GSMA) or supplying integrated accelerator-and-orchestration stacks (architecture consortia). Vendors that rely on opaque, closed stacks face growing pressure from both sides to demonstrate reproducible real-world performance against operator-focused metrics.