Don't count Google out just yet
They were caught asleep at the wheel, but the AI revolution has only just begun
The narrative spreading through the tech industry right now is that Google is on the way out – yet another incumbent that grew too comfortable in its massive success and failed to innovate, losing ground to smaller, nimbler competitors. Google Search is losing users to ChatGPT, and Google will have to kill its own golden goose to compete. Despite incredible technology and research capabilities, it “just can’t ship a product”.
I would certainly be worried if I were an executive at Google, but not as worried as the media would have you believe. For all the talk, my sense is that Google is actually well-positioned to take advantage of the current AI wave. Here’s why I think people are writing Google off way too early, and why they may even want to revisit their position.
Current pro-Google arguments
Let me first quickly go over the more obvious reasons Google is not doing that badly right now – exceptional talent, massive datasets, custom AI compute chips, practically infinite resources, and the fact that competitive pressure is likely to shake things up and force Google to move faster than it previously has.
Exceptional talent. Google AI and DeepMind (now just Google DeepMind) are two of the three best AI research labs in the world (guess the third). Between them, the two labs have contributed the Transformer, the Vision Transformer, AlphaGo, AlphaFold, TensorFlow, JAX, the medical Q&A model Med-PaLM, the original work on emergent behaviors and chain-of-thought prompting, and the current SotA scaling laws for LLMs.
Massive datasets. Google has decades of search data, orders of magnitude more than its competitors have, not to mention YouTube’s video data, which companies have barely started tapping into.
AI compute. There’s a decent case to be made that compute will be the biggest bottleneck for AI in the next decade. If so, Google is sitting comfortably: PaLM was trained on Google’s custom Pathways AI infrastructure with a compute budget of around 2560 zettaFLOPs (2.56x10^24 FLOPs) over 64 days, for a throughput of 4x10^22 FLOPs per day. The GPT-3 paper doesn’t disclose training duration, but combining estimates that GPT-3 took 1-2 months to train with the reported 314 zettaFLOPs (3.14x10^23 FLOPs) of training compute, we can deduce that their throughput was probably no higher than 1x10^22 FLOPs per day. PaLM came out later, but the gap is still 4x in the best case.
Infinite resources. Google is sitting on over a hundred billion dollars in cash and is profitable, unlike some of its competitors.
Competitive pressure. Yes, Google was asleep at the wheel, and they lost the first battle to OpenAI. However, for a company that has been winning so much for so long while putting in so little effort, this was probably inevitable. It’s not that surprising that Google productized so little of its research – it had no reason to! Now that it has every reason to, it makes sense that its behavior will change moving forward.
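The compute comparison above reduces to a few lines of arithmetic. Note that the GPT-3 training duration is an outside estimate, not a disclosed figure, so the GPT-3 throughput is a best-case guess:

```python
# Back-of-the-envelope training-throughput comparison.
# Total-compute figures are from the PaLM and GPT-3 papers;
# GPT-3's training duration is an assumed estimate (1-2 months).
PALM_FLOPS = 2.56e24   # total PaLM training compute (2560 zettaFLOPs)
PALM_DAYS = 64

GPT3_FLOPS = 3.14e23   # total GPT-3 training compute (314 zettaFLOPs)
GPT3_DAYS_LOW = 30     # shortest assumed duration -> highest throughput

palm_throughput = PALM_FLOPS / PALM_DAYS        # = 4e22 FLOPs/day
gpt3_throughput = GPT3_FLOPS / GPT3_DAYS_LOW    # ~1e22 FLOPs/day at best

print(f"PaLM:  {palm_throughput:.2e} FLOPs/day")
print(f"GPT-3: {gpt3_throughput:.2e} FLOPs/day (best case)")
print(f"Gap:   {palm_throughput / gpt3_throughput:.1f}x")
```

If GPT-3 actually took the full two months, its throughput halves and the gap doubles to roughly 8x.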
These are all great reasons not to count Google out yet, but they are subsumed by a more important point – when it comes to AI, they are ridiculously well-diversified.
Google is an AI portfolio
Except for actually building products, Google + DeepMind leads in practically every subfield of AI. They have the best hardware, top-tier frameworks (JAX more so than TensorFlow nowadays), are tied for the best LLMs (yes, GPT-4 beats PaLM, but PaLM is a year old by now – let’s see what Google’s response looks like), and have hands-down the best reinforcement learning, the best AI for biotech, the best self-driving cars, and the best fundamental AI research (i.e. understanding how the models work – see the earlier point about Google DeepMind discovering emergence and chain-of-thought).
Many of these endeavors are likely to be fruitless. However, even a single win can produce astounding financial gain and reaffirm Google’s position in the market. It could be Waymo, Isomorphic Labs, Bard, or who knows what will come out next month. Google has multiple ongoing projects that are redefining the limits of humanity’s current technological capabilities, and although progress has been slow until now, it would be foolish to assume that the executives at Google won’t look to these products with a more determined eye now that competition is heating up.
If Google takes the current moment seriously, it could also productize the groundbreaking research it has been sitting on but never done anything with – its Med-PaLM 2 model is the first LLM to pass medical exams and match human performance on medical Q&A, AlphaCode outperforms the median participant in international coding competitions, and DreamerV3 is the closest thing humanity has today to an autonomous AI agent that can interact with an arbitrary environment and get any (sufficiently simple) task done.
Google might be fine even if none of these endeavors work out. After all, Microsoft and OpenAI have revealed new opportunities in business productivity software and AI APIs. Google has missed out on these so far, but first-mover advantages can be fickle. Google was not the first search engine, just as Facebook wasn’t the first social media platform. In general, big tech companies have done well for themselves over the past decade by waiting for startups to invent new products and then copying them.
In fact, even if Google never ships another product for the rest of its existence, it can still benefit massively from the coming AI revolution. All it has to do is build a platform of AI tooling for other companies to build on top of because, believe it or not, AI is still an incredibly difficult space to build in. There are certainly some products that require little more than an LLM API, but for the truly ambitious companies in healthcare, law, finance, software development, etc., existing APIs are not yet sufficient. This means we will need to train more and better models with more data and more compute, which means more demand for infrastructure and AI expertise.
In short, Google has led AI for so long that it is sitting on the best talent, infrastructure, and technology of any company in the world, by a wide margin. Google is currently losing the battle in one specific area of AI, but even there, it is not clear whether it faces an insurmountable disadvantage – be it OpenAI’s first-mover advantage and resulting data moat, Google’s own inability to execute, or something else – or whether Bard will catch up to ChatGPT given enough time. Overall, it seems almost impossible to imagine an AI-centric future in which Google is not a significant player.
What should Google do?
Google executives and engineers should be ecstatic. Over the past several years, no company has spent as much as Google building up talent, infrastructure, and institutional knowledge around AI. Now, the final puzzle piece has fallen into place – a fast-moving challenger has arrived and revealed to the world that AI is even bigger than we previously anticipated. Every person and company wants to disrupt themselves, their competitors, and their industry with shiny new AI tooling.
And no company is in a better position to offer those tools than Google is today.
Though there will certainly be big winners, it seems rather improbable that AI will be winner-take-all. After all, there are so many subfields to disrupt – chatbots and search, biotech, art and entertainment, self-driving cars, business productivity software, and more. Google should acknowledge and rejoice over the massive opportunity ahead of it. Don’t try to tackle it all. Rather, focus on core strengths: take on big bets that few other companies are capable of, wait for the dust to settle to see which products are worth investing in, and in the meantime, build the infrastructure that will allow the next generation of startups to succeed.
From the POV of someone building in the space, I think Google should start by building an AI hub similar to Hugging Face (or acquiring them) and fight back against OpenAI by doing what its competitor once promised to do – go open-source. Google already has some of the best open-source models available today (Flan-T5 and UL2). These are currently significantly worse than GPT-3/4, but this doesn’t have to be the case. Imagine Google built a platform around its somewhat more capable models, say Flamingo (80B parameters) or Chinchilla (70B), along with a hub of LLM tooling such as fine-tuning and prompt-tuning services powered by its TPU chips. My bet is that the industry would happily invest in that platform over OpenAI’s, given the existential discomfort developers feel when depending on mission-critical closed-source APIs – not to mention the significant advantages around data privacy and security.
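For a sense of how low the barrier already is, here’s a minimal sketch of querying Google’s open-source Flan-T5 through the Hugging Face `transformers` library – the kind of tooling a Google-run hub could offer natively. The model checkpoint is real; the Google-hosted platform is, of course, hypothetical:

```python
# Minimal sketch: running Google's open-source Flan-T5 locally via the
# Hugging Face `transformers` pipeline API. Uses the small checkpoint
# so it downloads quickly; larger variants follow the same pattern.
from transformers import pipeline

generate = pipeline("text2text-generation", model="google/flan-t5-small")

result = generate("Translate English to German: How old are you?")
print(result[0]["generated_text"])
```

A Google-operated hub could wrap exactly this workflow with managed TPU-backed fine-tuning, which is where the platform bet described above would pay off.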
Conclusion
Google is in a better position than people think. There are tons of opportunities in AI, and Google is well-positioned to play a role in every single one of them. It doesn’t need to know how to build products – it can copy, acquire, or simply be an infrastructure provider for the next generation of startups.
More importantly, it would be a tragedy for the industry and for humanity if Google squandered this opportunity and died a slow death due to the launch of a chatbot. Google has incredible technology that could make the world a better place – we could have better and cheaper healthcare, legal services, software, and more.
AI isn’t magic, and change won’t happen for free. The technology is nascent and painfully difficult to build with. Many AI startups being born today are likely to fail not because of competition but because of the difficulty of the challenge they take on. My bet, and my hope, is that Google sees this, steps up to the opportunity, and helps guide the industry and the world towards safe, effective, and abundant AI.