This post was originally published on this site.
Author: Zena Assaad, Senior Lecturer, School of Engineering, Australian National University
Original article: https://theconversation.com/google-is-going-all-in-on-ai-its-part-of-a-troubling-trend-in-big-tech-257563
Google recently unveiled the next phase of its artificial intelligence (AI) journey: “AI mode”.
This new feature will soon be released to users of Google’s search engine in the United States, with no timeline yet for the rest of the world. The company says it will be akin to having a conversation with an expert who is well versed in a wide range of topics.
This is just one of many steps Google is taking in pursuit of its “all-in” approach to AI.
The “all-in” approach extends beyond integrating the technology into different applications. Google is providing products all along the AI supply chain – a strategy known as “vertical integration” – housing everything from AI computer chips through to the user interfaces we interact with daily, such as Google Maps and Gmail.
Google isn’t the only AI company with ambitions of vertical integration. OpenAI, for example, recently acquired a hardware startup co-founded by Apple’s Jony Ive, a move that will centralise hardware development within the company. Amazon is taking similar steps: it owns cloud computing platforms, designs custom chips and devices, and is incorporating more AI services into its products.
This may be the beginning of a trend of vertical integration across big tech. And it could have significant implications for users and companies alike.
The AI ‘tech stack’
Hardware, software, data sources, databases and servers are some of the layers that make up what is commonly referred to as the “AI tech stack”.
There are four main layers to Google’s evolving vertical tech stack:
1. Hardware layer. Google develops its own AI chips, known as tensor processing units (TPUs). The company claims these chips provide superior performance and efficiency compared to general purpose processors.
2. Infrastructure layer. The company uses its own cloud infrastructure to source its computing power, networking and storage requirements. This infrastructure is the foundation for running and scaling AI capabilities.
3. Model development layer. In-house research capabilities are used to drive the development of the company’s products and services. This includes research around machine learning, robotics, language models and computer vision.
4. Data layer. Data is constantly sourced from users across all Google platforms, including its search engine, maps and email. Data collection is a condition of using any Google application.
Some argue vertical integration is an optimal and cost-effective business strategy in many industries, not just tech. However, the realities of this set-up prove otherwise.
Fuelling power imbalances
Google and OpenAI are two of just a handful of companies that dominate the global technology market.
This market dominance allows these companies to charge higher markups for their goods and services and to engage in abusive practices in online advertising.
Vertical integration further skews this power imbalance by centralising the layers of the AI tech stack within one company. Distributing hardware, infrastructure, research and development, and data across multiple providers helps support a more equitable playing field across the industry.
The loss of this equity creates greater barriers to entry for smaller companies as the larger conglomerates keep everything in-house.
It also reduces incentives to innovate in ways that benefit consumers because it eliminates the business competition that usually drives innovation.
Data is often described as the new gold. This is especially true in the case of AI, which is heavily reliant on data. Through its many platforms, Google has access to a continuous stream of data. In turn, this gives the company even more power in the industry.
The vulnerabilities of vertical integration
The success of a company that is vertically integrated relies on housing the best knowledge and expertise in-house. Retaining this level of resourcing within a small handful of companies can lead to knowledge and expertise hoarding.
Research shows knowledge and expertise hoarding reduces social learning and increases disparities between “winners” and “losers” in a given market. This creates an overall vulnerable industry, because net gains are lost in the pursuit of exclusivity.
Exclusivity also breeds a lack of resilience. That’s because the points of failure are centralised.
Risk is better managed with additional oversight, transparency and accountability. Collaborations across industry rely on these processes to work together effectively.
Centralising the AI tech stack within one organisation reduces external scrutiny, because it limits interactions with external providers of products and services. In turn, this can lead to a company taking greater risks.
Regulatory bodies can also provide external scrutiny.
However, the current push to deregulate AI is widening the gap between technology development and regulation.
It is also allowing for big tech companies to become increasingly opaque. A lack of transparency raises issues about organisational practices; in the context of AI, practices around data are of particular concern.
The trend towards vertical integration in the AI sector will further increase this opacity and heighten existing issues around transparency.
Zena Assaad does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.