To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution revolves around a platform where tournament staff can access data visualizations supporting operational decision-making. This dashboard leveraged a high-performance network and private cloud environment to collect and deliver insights from diverse real-time data feeds.
This was a glimpse of what AI-ready networking looks like at scale: a real-world stress test with implications for everything from incident management to enterprise operations. While models and data generation get the lion's share of boardroom attention and media hype, networking is the critical third element of a successful AI implementation, explains Jon Green, CTO of HPE Networking. "Disconnected AI doesn't get you very far. You need a way to get data into it and out of it for both training and inference," he says.
As businesses move toward distributed, real-time AI applications, tomorrow's networks will need to move ever-larger volumes of information at lightning-fast speeds. What played out on the greens at Bethpage Black represents a lesson for every industry: purpose-built networks are a make-or-break factor in turning the promise of AI into real-world performance.
Making a network AI-ready
More than half of organizations still struggle to keep their data pipelines running. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they can push and pull real-time data to drive innovation. This is a significant improvement over last year's numbers (just 7% reported having such capabilities in 2024), but there is still work to be done to integrate data collection with real-time decision-making.
The network may hold the key to further narrowing this gap, and part of the solution will likely come in infrastructure design. Traditional enterprise networks are engineered to handle the predictable flow of business applications such as email, browsers, and file sharing, but they are not designed to field the dynamic, high-volume data movement required by AI workloads. Inference in particular relies on shunting vast datasets between multiple GPUs with supercomputer-like precision.
"You can play fast and loose with a standard, off-the-shelf enterprise network," Green says. "Few people will notice if an email platform is half a second slower than it might otherwise be. But with AI transaction processing, the entire job is gated by the final calculation, so if you've got any loss or congestion, it becomes really noticeable."
Networks designed for AI should, therefore, operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and the ability to scale. Another consideration is the increasingly distributed nature of AI workloads, which depends on the smooth flow of data across locations.
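Green's point that "the entire job is gated by the final calculation" can be illustrated with a rough simulation. The sketch below models a synchronized AI job in which every step waits for the slowest network link before proceeding; all numbers (GPU counts, delays, congestion rates) are invented for illustration and are not HPE measurements.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def step_time(num_gpus, congestion_prob=0.0, delay_ms=50.0):
    """Time for one synchronized step: the max over all per-GPU link times,
    because every GPU must wait for the last arrival."""
    times = []
    for _ in range(num_gpus):
        t = 10.0  # nominal per-step network transfer time, in ms (hypothetical)
        if random.random() < congestion_prob:
            t += delay_ms  # occasional congestion on one link stalls everyone
        times.append(t)
    return max(times)

def job_time(num_gpus, steps, congestion_prob):
    """Total job time: the sum of synchronized step times."""
    return sum(step_time(num_gpus, congestion_prob) for _ in range(steps))

clean = job_time(num_gpus=64, steps=100, congestion_prob=0.0)
congested = job_time(num_gpus=64, steps=100, congestion_prob=0.01)
print(f"no congestion: {clean:.0f} ms")
print(f"1% per-link congestion: {congested:.0f} ms")
```

Even with congestion on only 1% of links per step, the chance that *some* link among 64 is delayed in a given step is nearly 50%, so total job time balloons far beyond the 1% that intuition might suggest. That is why loss and congestion that would go unnoticed on an email network become "really noticeable" for AI workloads.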
The Ryder Cup was a practical demonstration of this new class of networking. During the event, a connected intelligence center was deployed to capture data from ticket scans, weather reports, GPS-enabled golf carts, concessions and merchandise sales, spectator and customer queues, and network performance. Additionally, 67 AI-enabled cameras were installed throughout the course. Inputs were analyzed through operational intelligence dashboards, providing staff with a quick view of activity across the entire venue.