Google Cloud networking is advancing thanks to several new developments unveiled at Google Cloud Next.
Google executives detailed the hyperscaler’s latest initiatives to improve cloud networking. Artificial intelligence (AI) was a prominent theme at Google Next this year, as it has been at most industry events, but it wasn’t the only one. Even as AI becomes increasingly common in the cloud, plenty of non-AI workloads remain there, and they stand to gain from better networking too.
Google debuted the multicloud capabilities of its Cross-Cloud Network at Google Cloud Next ’23, back in September. That effort is now being broadened with the release of the service-centric Cross-Cloud Network feature set.
Muninder Sambi, Google’s VP/GM of cloud networking, declared during a Next ’24 event that “the cross-cloud network is a new era of cloud networking.”
What is a service-centric cross-cloud network?
Google Cloud has unveiled a new approach, the service-centric Cross-Cloud Network, that makes it easier to connect to services across different environments.
“We’re really pleased with what Google Cloud has to offer to simplify and make it easy for you to deploy any service to any cloud in a straightforward, dependable, and secure manner,” Sambi stated. “The service-centric cross-cloud network we’re introducing allows you to connect to any workload, from on-premises assets to any cloud, easily, safely, and fully managed by Google Cloud. This includes AI/ML workloads, Vertex, and other best-of-breed Google Cloud services, as well as on-premises assets.”
According to Sambi, the service-centric Cross-Cloud Network lets companies connect to services consistently no matter where those services live. He likened it to an enhanced Virtual Private Cloud (VPC) built around Private Service Connect (PSC), which enables DevOps teams to publish services and manage traffic for workload rollouts of all kinds with ease.
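For readers unfamiliar with PSC, the basic mechanics are straightforward: a service producer exposes a service attachment, and a consumer creates a forwarding rule in its own VPC that targets that attachment, yielding a private endpoint. The sketch below shows roughly what that looks like with the Go Compute Engine client; the project, region, network, address, and attachment names are placeholder assumptions, not values from Google’s announcement.

```go
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	// NewService picks up Application Default Credentials.
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// A PSC consumer endpoint is a forwarding rule whose target is the
	// producer's service attachment. All names below are placeholders.
	rule := &compute.ForwardingRule{
		Name:    "psc-endpoint-demo",
		Network: "projects/consumer-project/global/networks/default",
		Target: "projects/producer-project/regions/us-central1/" +
			"serviceAttachments/example-attachment",
		// Reserved internal IP the consumer will use to reach the service.
		IPAddress: "projects/consumer-project/regions/us-central1/addresses/psc-ip",
	}

	op, err := svc.ForwardingRules.Insert("consumer-project", "us-central1", rule).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("PSC endpoint creation started: operation %s", op.Name)
}
```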
Best practices for multi-cloud traffic management, such as cross-region failover, health checks, dynamic route propagation, and security, are included in the service.
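As a small illustration of one of those building blocks, here is a hedged sketch of defining a basic TCP health check with the same Go client; the name, port, and threshold values are placeholder choices for illustration, not recommendations from Google.

```go
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	hc := &compute.HealthCheck{
		Name:               "cross-cloud-svc-health", // hypothetical name
		Type:               "TCP",
		TcpHealthCheck:     &compute.TCPHealthCheck{Port: 443},
		CheckIntervalSec:   5, // probe every 5 seconds
		TimeoutSec:         5, // each probe times out after 5 seconds
		HealthyThreshold:   2, // two successes mark a backend healthy
		UnhealthyThreshold: 2, // two failures mark it unhealthy
	}

	op, err := svc.HealthChecks.Insert("consumer-project", hc).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("health check creation started: operation %s", op.Name)
}
```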
Gemini Cloud Assist enhances networking with AI
The large language model (LLM) behind Google’s Gemini generative AI is being put to work on networking.
A new tool called Gemini Cloud Assist pairs the LLM with domain expertise to enhance several aspects of the network lifecycle.
Mark Church, a Google product manager, stated during the Next ’24 event that “Gemini Cloud Assist is much more than a raw language model; it will revolutionize the entire cloud lifecycle, from network design to network operation and troubleshooting to network security and optimization.” Because Gemini Cloud Assist understands a user’s project and environment, within the bounds of the user’s IAM credentials, it has access to that context: it knows the configuration, logs, and metrics.
Gemini Cloud Assist enhances networking by helping users identify problems and streamlining the troubleshooting process. During the session, Church showed an example of how it can surface pertinent trends, validate conclusions against raw data, and pinpoint specific problems.
“The Gemini Cloud Assist project is still in its early stages, but there is already a lot of promise,” Church stated.
Networking for AI: AI, AI, and more AI
Re-architecting networking is necessary to get the most out of the growing number of artificial intelligence (AI) applications running in the cloud.
Google Cloud engineering fellow Anna Berenberg described a number of changes intended to enhance networking for generative AI (genAI) application deployments.
“An AI application is not a web application,” Berenberg said. “They’re actually very different, even though both share optimization goals around efficiency and traffic management.”
According to her, AI applications differ greatly from standard web applications: their processing times are typically far less deterministic. Google Cloud is attempting to address the issue in part by introducing model-as-a-service endpoints, which handle inference as a service. Google has also created traffic management features that suit AI workloads, such as auto-routing, multi-modal affinity, and programmability via service extensions.
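To make the affinity point concrete, here is a deliberately simplified Go sketch of session affinity for model-serving backends: hashing a stable session key keeps a conversation pinned to the replica that holds its state, such as a warm cache. The backend names are hypothetical, and this conceptual sketch is not how Google Cloud’s feature is implemented.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Placeholder names for model-serving replicas.
var backends = []string{"inference-a", "inference-b", "inference-c"}

// pickBackend maps a session key to a stable backend choice, so repeated
// requests from the same conversation land on the same replica.
func pickBackend(sessionID string) string {
	h := fnv.New32a()
	h.Write([]byte(sessionID))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	// The same session always resolves to the same replica.
	fmt.Println(pickBackend("chat-session-42"))
	fmt.Println(pickBackend("chat-session-42"))
	fmt.Println(pickBackend("chat-session-7"))
}
```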
According to Berenberg, networking solutions will eventually develop into AI-assisted systems that can support natural language, configure themselves based on intent, and optimize entire microservice stacks.
She stated, “We think that all products will support natural language and that networking products will become AI-assisted products.”