Generative AI — Potential pitfalls, challenges, and risks
Key takeaways
The rise of generative artificial intelligence (AI) is similar to the development and advancement of other disruptive technologies — while the technology and infrastructure may be there, broader acceptance and scalability may still be lacking.
Generative AI has the potential to be broadly transformative, but there are many outstanding questions and concerns that need to be addressed before it is accepted on a broader scale.
As discussed in more detail below, potential challenges include labor market disruptions, government regulation, escalating costs and energy requirements, accuracy concerns, and geopolitical risks.
The rise of generative AI is similar to the development and advancement of other disruptive technologies — while the technology and infrastructure may be there, broader acceptance and scalability may still be lacking. The technology has the potential to be broadly transformative, but there are many outstanding questions and concerns that need to be addressed before it is accepted on a broader scale, as evidenced by the increasing number of AI-related incidents (Chart 1). The AI Incident Database (AIID) tracks examples of ethical misuse of AI, including autonomous cars causing pedestrian fatalities and facial recognition systems leading to wrongful arrests.
Chart 1: Increase in the number of AI incidents from 2012 through 2023. Sources: Wells Fargo Investment Institute; AI Incident Database (AIID), “The AI Index 2024 Annual Report,” April 2024. Data through year-end 2023. The AIID is a public database that tracks instances of ethical misuse of AI.
At a higher level, obstacles to the development, advancement, and widespread acceptance of AI include the cost of developing effective models, challenges in ensuring accurate outcomes, and potential impacts on the workforce. There are also geopolitical and regulatory risks and challenges, potential impacts on society, behavioral issues, and concerns around potential infringement of intellectual property (for example, copyright and trademark violations).
We also expect that security will be an issue, though its scope is difficult to predict, tied to the threat of new generative-AI capabilities being used to breach unsecured networks. Cyberattacks and data breaches could accelerate if generative-AI models are not secure and are accessible to bad actors. As such, cybersecurity firms will rely on data sharing, partnerships, and machine learning to uncover behavioral patterns in efforts to protect networks and sensitive data.
Labor markets are among the areas most exposed to AI’s potentially disruptive effects, in tandem with the potential lift to productivity. Generative AI’s impact on the labor market likely will be more nuanced than that of more traditional AI systems. Earlier AI systems were geared more toward automation that jeopardizes jobs in an array of labor-intensive services industries, from food services to office support to customer service and sales, categories in which more advanced generative AI is likely to reinforce that pressure. We expect generative AI’s disruptive effect on the labor market to mirror other forms of automation — as in the past, its impact likely will be mitigated over time by new occupations spawned by the innovations themselves. If this seems difficult to envision, consider that a recent MIT study estimated that 60% of U.S. workers are now employed in occupations that did not exist 84 years ago.1 Adoption lags have steadily declined as technology has advanced, so we anticipate that new industries and new roles will emerge faster this time.2
Sectors most exposed to generative AI’s potential effects on the workplace include knowledge-based activities in financial services, such as trading and asset management, along with support workers in technology, professional services, and other knowledge-based industries. We see divergent impacts on individuals in different types of positions — for example, research project leaders who build and evaluate the models should find generative AI to be a valuable tool with the potential to increase the value of their work and their compensation. However, AI likely will reduce demand for professional-service support roles, such as research analyst and administrative assistant positions. This has already been observed to some extent: from 2012 to 2022, job growth for more cognitive-intensive positions (that is, information, financial, and business services) outpaced the overall total by 0.7%. However, the trend has reversed with the introduction of AI — total nonfarm payroll growth outpaced growth in more cognitive-oriented roles by 0.8% from July 2022 to July 2024.3 The unequal boost from generative AI, which is skewed toward knowledge-based workers with higher pay, risks aggravating income inequality and inviting legislation that slows AI’s absorption into the economy.
1 Peter Dizikes, “Most work is new work, long-term study of U.S. census data shows,” MIT News, April 1, 2024.
2 Ibid; Capital Economics.
3 Based on monthly nonfarm payrolls data from the Bureau of Labor Statistics.
Government regulation is a second, highly visible uncertainty in the outlook for generative AI development and acceptance. Thus far, initiatives have ranged from industry pledges of voluntary safety guardrails in the deployment of new products to Federal Trade Commission requests for details on how companies are integrating AI into their operations. At least two bills working their way through Congress would a) enhance AI standards, accountability, and access to relevant technology and b) set voluntary standards on the use of AI technology. Also under discussion is bipartisan legislation directed at workforce training and education. The bills, targeted for passage during the lame-duck session of Congress, face competition from an expiring continuing resolution and other demands. Equally important, exposure to copyright challenges could impede the development of generative AI’s large language models. The role of government regulation in addressing some of the risks discussed in this section continues to be debated, and governments around the world are becoming increasingly involved in the regulation of AI, based partly on concerns about societal impacts and data privacy.
As governments worldwide attempt to regulate AI, we think these efforts need to be balanced and measured appropriately to prevent any potential slowdown in the pace of innovation for generative AI going forward. According to Stanford University’s AI Index (Chart 2), the total number of AI-related bills passed into law in select countries has shown a noticeable increase from only one bill passing in 2016 to 28 passing in 2023.
Chart 2: Number of AI bills passed into law in select countries (2016 – 2023). Sources: Wells Fargo Investment Institute and Stanford University Institute for Human-Centered AI, “The AI Index 2024 Annual Report,” April 2024.
The U.S. government currently lags the European Union in AI-related legislation, partly because of partisan divisions and partly because of its caution in not impeding the development of emerging national champions. Slow legal action by the federal government left an opening for states, which passed more laws addressing perceived threats from automated systems in 2023. These laws include restrictions on the use of AI in political advertisements, general advertisements, gambling, hiring, and other activities.
The Biden Administration has been working to update and expand upon its 2023 executive order on safe, secure, and trustworthy AI. Last year, the Biden Administration brought together seven leading AI companies, each of which made a voluntary commitment to act responsibly in managing the risks of the new technology. The companies’ pledges centered on making sure new AI-related products are safe before they are introduced to the public as well as on investing in cybersecurity and proper safeguards to protect models — all working toward preventing harmful bias and discrimination and protecting privacy.4
Legislative action in the U.S. has shifted to individual states, including California and Colorado. On August 28, 2024, California’s Democratic-controlled legislature passed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill focused on ensuring AI developers meet strict safety criteria before training their large language models and would have penalized AI developers for adverse consequences leading to death or bodily harm to another human or harm to property. On September 29, 2024, California Governor Gavin Newsom vetoed SB 1047.
The European Union published the final AI Act, effective August 1, 2024. The EU’s AI Act is widely considered the most comprehensive piece of AI-related legislation. The AI Act buckets AI applications into three risk categories. The first category includes AI applications and systems deemed to create an unacceptable risk, such as government-run social scoring or manipulative AI. The second category includes high-risk applications, such as resume-scanning tools that rank job applicants. The final category includes AI applications neither banned nor classified as high-risk, which are mostly left unregulated.
China’s regulatory framework concerning AI continues to evolve. Compared to the U.S., China has a more targeted and iterative approach to AI regulations. The first comprehensive set of finalized generative-AI rules, referred to as the Interim Measures for Managing Generative AI Services, became effective on August 15, 2023. On December 1, 2023, measures for review of scientific and technological ethics (including businesses engaged in AI) became effective.
4 The White House, “Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023.
Immediate issues in generative-AI implementation include both the computing power and the cost of data-intensive model development. At the moment, the costs associated with effectively developing, training, and managing generative-AI large language models are largely prohibitive, as these models are very compute-, semiconductor-, networking-, and storage-intensive. Training large language models is quite expensive; inference, which occurs when the already trained model is prompted for a response, is far less so. Consequently, we believe there will be a significant increase in hardware demand, notably within the data-center environment, to accommodate the substantial increase in AI workloads. We do not believe that companies have reached the point of scaling the technology to be profitable. In our view, it may take a number of years to increase the operational efficiency of various large language models and decrease costs to a level more in line with existing search engines. Elsewhere, data-center operators have begun to raise commercial lease rates to reflect limited computing capacity and the added power costs of running energy-intensive AI-related workloads.
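The gap between training and inference costs can be made concrete with a back-of-the-envelope sketch. The snippet below uses the common approximations that training a transformer model takes roughly 6 × N × D floating-point operations (N = parameters, D = training tokens) and that generating one token at inference takes roughly 2 × N; the GPT-3-scale inputs are illustrative assumptions, not vendor disclosures.

```python
# Back-of-the-envelope compute sketch (illustrative assumptions only).
# Common approximations for transformer-based large language models:
#   training FLOPs  ~ 6 * N * D   (N = parameters, D = training tokens)
#   inference FLOPs ~ 2 * N       per generated token
N = 175e9  # parameters, roughly GPT-3 scale (assumption)
D = 300e9  # training tokens, a GPT-3-era order of magnitude (assumption)

train_flops = 6 * N * D  # ~3e23 FLOPs for one full training run
infer_flops_per_token = 2 * N  # ~3.5e11 FLOPs per generated token

# One training run costs on the order of a trillion times the compute of
# generating a single token, which is why training budgets dominate and
# serving an individual query is comparatively cheap.
print(f"Training:  ~{train_flops:.1e} FLOPs (one-time)")
print(f"Inference: ~{infer_flops_per_token:.1e} FLOPs per token")
```

Even under these rough assumptions, the asymmetry helps explain why training demand, not query volume alone, is driving the current build-out of data-center hardware.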
Power supply is key to operating a data center, so most facilities have redundancies in place to help avoid downtime. Data-center outages usually stem from a few categories of causes: design flaws, capacity issues, hardware failures (often due to overheating or cooling problems), human error, environmental events, and power disruptions. The power requirements of an AI-equipped data center have been compared to the power usage of a moderately sized city. As such, access to reliable, uninterrupted power is a requirement.
From a geographic perspective, data centers are fairly concentrated within the U.S. In fact, more than half of U.S. data centers are located in a handful of states, including Virginia, California, Texas, Ohio, Illinois, and New York. Although data centers are currently located near major hubs around the country, we expect their coverage to expand geographically over time. Greenfield opportunities to expand the data-center footprint remain. Yet, as those opportunities diminish, companies may revisit existing data-center locations to retrofit and upgrade hardware and infrastructure in support of the power and data-consumption needs of new AI technologies. A growing trend outside of the U.S. has been countries investing in sovereign AI, which refers to a nation’s ability to utilize generative AI supported by localized data-center infrastructure investments, proprietary data, and networks to protect its national security.
Given the volume of data and information collected as more and more devices are connected to the cloud, businesses have become dependent on data centers. Revenue may therefore be lost as a result of an outage, and data-center operators may even be required to reimburse customers for revenue lost while an outage occurs. Further, an outage may expose the data-center operator to reputational risk, as it can suggest a lack of adequate controls and measures to ensure the security of information. Other potential losses from extended downtime include business interruption and lower productivity. Although eliminating outages altogether may be impossible, maintaining proper controls could help mitigate the adverse impacts of an outage.
Issues have arisen around ChatGPT and other large language models providing answers, delivered with conviction, that turn out to be inaccurate. Because large language models were trained on the structure of language, they are more focused on which word or concept plausibly comes next and less focused on whether the answer is accurate. The risks associated with hallucinations or inaccurate outcomes may be reduced by training the large language model on more high-quality data — the model is only as useful as the quality of the data it uses to produce an outcome.
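A toy sketch can make this failure mode concrete. The hypothetical snippet below builds a deliberately tiny next-word model from a made-up, contradictory corpus; like a production large language model (at incomparably greater scale and sophistication), it selects statistically plausible continuations with no notion of whether the resulting statement is true.

```python
import random

# Toy next-word model (illustrative only; real LLMs are neural networks
# trained on vastly larger corpora). The training text is deliberately
# contradictory to show how fluent output can still be wrong.
corpus = ("the capital of france is paris . "
          "the capital of france is lyon .").split()

# Record which words have followed each word in the training text.
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 5) -> str:
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        words.append(random.choice(follows.get(words[-1], ["."])))
    return " ".join(words)

# The model completes the sentence fluently either way: "paris" and
# "lyon" are equally plausible continuations of its training data.
# Fluency, not factual accuracy, is what the objective rewards.
print(generate("the"))
```

The same dynamic, at vastly greater scale, is why a large language model can state an incorrect answer with the same confident phrasing as a correct one.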
Data quality is relevant because large language models depend on the quality and depth of the datasets they use to determine an output. As the saying goes: garbage in, garbage out. If the model is trained on stale or outdated data, the output will likely be poor and undependable as well. Consequently, it is vital to know how the model was trained and developed to better appreciate the outcomes it produces. Over time, however, we expect the accuracy of these large language models to improve as updated versions that incorporate a higher number of parameters correct the inaccuracies of prior versions.
Just a few years ago, large language models were trained using millions of parameters. For example, the parameter count for the Bidirectional Encoder Representations from Transformers (BERT) large language model was 110 million in 2018. That figure has exploded: OpenAI’s GPT-3 utilized more than 175 billion parameters during its training phase, while the fourth version (GPT-4) is estimated to utilize more than 1 trillion parameters. This hypergrowth in parameter counts has made producing an outcome for an AI-related query a much more complex process. It has also contributed to GPUs (graphics processing units) becoming one of the most important components used for training generative-AI-based large language models.
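Some rough arithmetic illustrates why parameter growth translates directly into hardware demand. The sketch below estimates the memory needed just to store model weights at 16-bit precision, using the parameter counts cited above (the GPT-4 figure is an external estimate, and real systems also need memory for activations, optimizer state, and other overhead).

```python
# Rough weight-storage arithmetic (illustrative; excludes activations,
# optimizer state, and other memory that training also requires).
BYTES_PER_PARAM = 2  # assuming 16-bit (FP16/BF16) weights

def weight_memory_gb(params: float) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return params * BYTES_PER_PARAM / 1e9

models = {
    "BERT (2018)": 110e6,        # 110 million parameters
    "GPT-3 (2020)": 175e9,       # 175 billion parameters
    "GPT-4 (estimated)": 1e12,   # ~1 trillion parameters (estimate)
}
for name, params in models.items():
    print(f"{name}: ~{weight_memory_gb(params):,.1f} GB of weights")

# A single high-end data-center GPU carries on the order of 80 GB of
# memory, so models at GPT-3 scale and above must be sharded across
# many GPUs working in parallel -- one driver of GPU demand.
```

By this arithmetic, BERT’s weights fit comfortably on a single GPU, while a trillion-parameter model requires dozens of accelerators before any computation even begins.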
We have witnessed escalating trade tensions between China and the U.S. since 2018, and U.S. export controls for key semiconductor technology continue to escalate with the intent of limiting China’s ability to build out its own independent semiconductor ecosystem. The ratcheted export restrictions since 2022 have limited China’s access to critical next-generation semiconductor chips and equipment. However, this has led China to focus its domestic investments on more localized companies at the expense of U.S.-based semiconductor suppliers.
Driven by concerns over national security, U.S. restrictions, along with those of the Netherlands and Japan, have placed pressure on China’s ability to compete and thrive in generative AI. On September 6, 2024, the U.S. Department of Commerce introduced new export controls for critical technologies, including quantum computing and semiconductor goods. The export controls cover quantum computers and components, advanced chipmaking tools, software related to metals and metal alloys, and high-bandwidth memory semiconductor chips used in generative-AI applications.
Other recent developments indicate similar tensions. For example, on September 6, 2024, the Dutch government said it would expand export-licensing requirements to China for one semiconductor company’s deep ultraviolet (DUV) immersion lithography tools, in line with export restrictions imposed by the U.S. last year. Earlier in September, Bloomberg reported that China had threatened material economic retaliation against Japan should Japan align with the U.S. and further restrict sales of critical semiconductor capital equipment to Chinese companies. China could react to the new, stricter export controls by hindering Japan’s access to critical minerals needed for automotive production. Additionally, last year, China imposed export restrictions, aimed at U.S. semiconductor companies, on key minerals used in semiconductor and electric-vehicle production, including gallium, germanium, and graphite. We believe the tense geopolitical environment between the U.S. and China, as well as concerns over China’s adverse use of AI for military purposes, will contribute to headline risk, resulting in potential share-price volatility for AI-related semiconductor equities.
Amit Chanda
Equity Sector Analyst, Information Technology
Gary Schlossberg
Global Strategist
Jennifer Timmerman
Investment Strategy Analyst
Tom Christopher
Equity Sector Analyst, Communication Services
All investments are subject to market risk, which means their value may fluctuate in response to general economic and market conditions, the prospects of individual companies, and industry sectors due to numerous factors, some of which may be unpredictable. Be sure you understand and are able to bear the associated market, liquidity, credit, yield fluctuation, and other risks involved in an investment in a particular strategy.
Equity securities are subject to market risk, which means their value may fluctuate in response to general economic and market conditions and the perception of individual issuers. Investments in equity securities are generally more volatile than other types of securities. An investment that is concentrated in a specific sector or industry increases its vulnerability to any single economic, political, or regulatory development affecting that sector or industry. This may result in greater price volatility.
Risks associated with the Technology sector include increased competition from domestic and international companies, unexpected changes in demand, regulatory actions, technical problems with key products, and the departure of key members of management. Technology and Internet-related stocks, especially smaller, less-seasoned companies, tend to be more volatile than the overall market.
An index is unmanaged and unavailable for direct investment.
Stanford University AI Index tracks, collates, distills, and visualizes data relating to artificial intelligence.
Global Securities Research (GSR) and Global Investment Strategy (GIS) are divisions of Wells Fargo Investment Institute, Inc. (WFII). WFII is a registered investment adviser and wholly owned subsidiary of Wells Fargo Bank, N.A., a bank affiliate of Wells Fargo & Company.
The information in this report was prepared by Global Securities Research (GSR). Opinions represent GSR’s opinion as of the date of this report and are for general information purposes only and are not intended to predict or guarantee the future performance of any individual security, market sector, or the markets generally. GSR does not undertake to advise you of any change in its opinions or the information contained in this report. Wells Fargo & Company affiliates may issue reports or have opinions that are inconsistent with, and reach different conclusions from, this report. Past performance is no guarantee of future results.
The information contained herein constitutes general information and is not directed to, designed for, or individually tailored to, any particular investor or potential investor. This report is not intended to be a client-specific suitability or best interest analysis or recommendation, an offer to participate in any investment, or a recommendation to buy, hold, or sell securities. Do not use this report as the sole basis for investment decisions. Do not select an asset class or investment product based on performance alone. Consider all relevant information, including your existing portfolio, investment objectives, risk tolerance, liquidity needs, and investment time horizon. The material contained herein has been prepared from sources and data we believe to be reliable, but we make no guarantee to its accuracy or completeness.
Global Securities Research works with information received from various resources including, but not limited to, research from affiliated and unaffiliated research correspondents as well as other sources. Global Securities Research does not assign ratings to or project target prices for any of the securities mentioned in this report.
Global Securities Research receives research from affiliated and unaffiliated correspondent research providers with which Wells Fargo Investment Institute has an agreement to obtain research reports. Each correspondent research report reflects the different assumptions, opinions, and the methods of the analysts who prepare them. Any opinions, prices, or estimates contained in this report are as of the date of this publication and are subject to change without notice.
Wells Fargo Advisors is registered with the U.S. Securities and Exchange Commission and the Financial Industry Regulatory Authority but is not licensed or registered with any financial services regulatory authority outside of the U.S. Non-U.S. residents who maintain U.S.-based financial services account(s) with Wells Fargo Advisors may not be afforded certain protections conferred by legislation and regulations in their country of residence in respect of any investments, investment transactions, or communications made with Wells Fargo Advisors.
Wells Fargo Advisors is a trade name used by Wells Fargo Clearing Services, LLC and Wells Fargo Advisors Financial Network, LLC, Members SIPC, separate registered broker-dealers and non-bank affiliates of Wells Fargo & Company.