
My conversation with ChatGPT 4 on Bing, 12/01/2023

Me: What are the predicted revenues if AI replaces engineers, application developers, etc.? (terrible opening question)

ChatGPT 4 Bing: That is a very interesting question. There is no definitive answer to how much revenue AI could generate if it replaces some human jobs, but there are some estimates and projections based on various scenarios and assumptions. According to one report by McKinsey, generative AI, which can create content indistinguishable from human work, could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy¹. However, this also depends on how fast and widely gen AI is adopted, and how it affects the demand and supply of labor and skills. Another report by Forrester predicts that automation and AI will replace 4.9% of US jobs by 2030, and that generative AI will account for 30% of the losses². This means that about 2.5 million jobs in the US could be replaced by gen AI, mostly in the creative and professional sectors. However, this does not necessarily mean that the revenue from these jobs will be transferred to AI, as there may be other factors that influence the value and price of the output. Moreover, AI could also create new jobs and opportunities that are not yet foreseeable and enhance the productivity and quality of existing jobs. Therefore, the net impact of AI on the revenue and employment of different sectors and regions is hard to predict and measure and may vary significantly over time and across scenarios.

I hope this answer gives you some insights into the potential and challenges of AI. If you are interested in learning more, you can check out the sources I used for this answer³⁴⁵. Thank you for chatting with me. 😊

Source: Conversation with Bing, 12/1/2023

(1) https://bing.com/search?q=predicted+revenue+if+AI+replaces+engineers%2c+application+developers%2c+etc..

(2) AI could replace equivalent of 300 million jobs – report – BBC. https://www.bbc.com/news/technology-65102150.

(3) The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year.

(4) Will AI Replace Software Engineers Soon? No. In 20 Years? Maybe. https://www.terminal.io/blog/will-ai-replace-software-engineers.

(5) Generative AI Set To Replace 2.5 Million Jobs In The US Alone – Techopedia. https://www.techopedia.com/generative-ai-will-replace-2-5-million-jobs-in-the-u-s-by-2030.

(6) https://bing.com/search?q=predicted+revenue+if+AI+replaces+engineers%2c+application+developers%2c+etc.

(7) https://www.forbes.com/advisor/business/ai-statistics/.

(8) https://www.datacamp.com/blog/will-ai-replace-programming.
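
As a quick back-of-envelope check on the Forrester figures quoted above, here is a small Python sketch. The total-US-employment number is my own rough assumption (about 167 million people employed in 2023); it is not taken from Bing's sources.

```python
# Back-of-envelope check of the job-loss figures quoted in the answer above.
# Assumption (mine, not from the cited sources): roughly 167 million people
# employed in the US in 2023.
us_employment = 167_000_000

# Forrester: automation and AI replace 4.9% of US jobs by 2030.
total_jobs_replaced = us_employment * 0.049

# Generative AI accounts for about 30% of those losses.
gen_ai_jobs_replaced = total_jobs_replaced * 0.30

print(f"Jobs replaced by automation/AI by 2030: ~{total_jobs_replaced / 1e6:.1f} million")
print(f"Of which attributable to generative AI: ~{gen_ai_jobs_replaced / 1e6:.1f} million")
# Prints roughly 8.2 million and 2.5 million, which lines up with the
# "about 2.5 million jobs" figure in the answer above.
```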

Me: What are the projected government revenue impacts if we assume that AI does indeed replace large numbers of engineers, graphic designers, UX experts, writers, editors, paralegals, etc., as predicted?

ChatGPT 4 Bing: This is a complex and uncertain question, as the government revenue impacts of AI replacing jobs depend on many factors, such as the rate and extent of automation, the distribution of income and wealth, the tax system, the social safety net, and the demand for public goods and services. However, some possible scenarios and implications can be derived from existing studies and data.

One scenario is that AI will boost economic growth and productivity, and therefore increase the tax base and revenues for the government. This could enable the government to strengthen the social safety net and provide more public goods and services for those who are affected by job displacement or skill mismatch. This is the optimistic view expressed by some experts, such as the World Economic Forum¹ and the Brookings Institution³. They argue that AI will create new jobs and opportunities, and that workers can adapt and reskill to the changing labor market. They also suggest that the government should invest in R&D, education, infrastructure, and digital inclusion to foster innovation and inclusion in the AI era.

Another scenario is that AI will exacerbate inequality and reduce the tax base and revenues for the government. This could happen if AI displaces more jobs than it creates, or if the benefits of AI are concentrated among a few firms and individuals, while the costs are borne by the majority of workers and consumers. This is the pessimistic view expressed by some researchers, such as the McKinsey Global Institute⁴ and the BBC⁵. They warn that AI could replace up to a quarter of work tasks in the US and Europe, and that AI could affect nonroutine tasks that are currently performed by high-skilled workers, such as engineers, graphic designers, writers, editors, and paralegals. They also point out that AI could lead to discrimination, bias, and manipulation in hiring, pricing, and decision making. They recommend that the government should regulate AI, redistribute income and wealth, and protect workers’ rights and welfare in the AI era.

As you can see, there is no definitive answer to your question, as the government revenue impacts of AI replacing jobs are contingent on many assumptions and variables. However, I hope this answer gives you some insights into the potential and challenges of AI, and the role of policy in shaping its outcomes. Thank you for chatting with me. 😊

Source: Conversation with Bing, 12/1/2023

(1) Is Artificial Intelligence Replacing Jobs? Here’s The Truth. https://www.weforum.org/agenda/2018/09/is-artificial-intelligence-replacing-jobs-truth/.

(2) Automation and AI will disrupt the American labor force … – Brookings. https://www.brookings.edu/articles/automation-and-ai-will-disrupt-the-american-labor-force-heres-how-we-can-protect-workers/.

(3) THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE FUTURE OF WORKFORCES IN …. https://www.whitehouse.gov/wp-content/uploads/2022/12/TTC-EC-CEA-AI-Report-12052022-1.pdf.

(4) AI could replace equivalent of 300 million jobs – report – BBC. https://www.bbc.com/news/technology-65102150.

(5) https://bing.com/search?q=government+revenue+impacts+of+AI+replacing+jobs.

(6) https://www.bbc.com/news/business-65906521.

(7) https://www.bls.gov/opub/mlr/2022/article/growth-trends-for-selected-occupations-considered-at-risk-from-automation.htm.

 

Me: In my experience, businesses will do whatever benefits them, regardless of the impact on their workforce. Let’s explore the second, more pessimistic argument further. Assume that businesses scramble to control and monopolize AI, as they have traditionally tried to do with any new technology. Think of the famous Apple vs. Windows battle of the 80s and 90s. Assume Google and Amazon use their market positions to become the only two AI companies. What are the social, political, and financial implications of that scenario?

ChatGPT 4 Bing: That is a very hypothetical and speculative scenario, but I will try to answer it based on the information I have. If Google and Amazon become the only two AI companies, they would have enormous power and influence over the economy, society, and politics. Some possible implications are:

– **Economic implications**: Google and Amazon would dominate the AI market and extract huge profits from their products and services. They would also have access to vast amounts of data and information, which they could use to optimize their operations, innovate new solutions, and gain competitive advantages. They could also use their AI to influence consumer behavior, preferences, and choices, and to create personalized and addictive experiences. However, this could also lead to reduced competition, innovation, and diversity in the AI sector, as well as increased inequality, unemployment, and social unrest among those who are left behind or harmed by AI. Moreover, Google and Amazon could face regulatory, legal, and ethical challenges, as well as public backlash, for their AI practices and policies, such as data privacy, security, fairness, accountability, and transparency.

– **Social implications**: Google and Amazon would shape the culture, values, and norms of society through their AI. They would also affect the education, health, entertainment, and communication of people, as well as their social relationships and interactions. They could use their AI to provide benefits and opportunities for some groups of people, such as their customers, employees, and partners, but also to exclude, discriminate, or exploit others, such as their competitors, critics, and adversaries. They could also use their AI to manipulate, deceive, or coerce people, either intentionally or unintentionally, through their algorithms, platforms, and content. Furthermore, Google and Amazon could create a digital divide and a power imbalance between those who have access to and control over AI, and those who do not.

– **Political implications**: Google and Amazon would influence the governance, policies, and decisions of governments and institutions through their AI. They would also affect the rights, freedoms, and responsibilities of citizens and stakeholders, as well as their participation and representation in the political process. They could use their AI to support or oppose certain causes, candidates, or parties, either directly or indirectly, through their lobbying, donations, endorsements, or campaigns. They could also use their AI to monitor, censor, or influence the public opinion, discourse, and agenda, either openly or covertly, through their media, networks, and services. Additionally, Google and Amazon could pose a threat or a challenge to the sovereignty, security, and stability of nations and regions, as well as to the global order and cooperation, through their AI capabilities, activities, and interests.

These are some possible implications of the scenario you described, but they are not inevitable or deterministic. There are many factors and actors that could affect the outcome and direction of the AI development and deployment, such as the users, developers, regulators, researchers, activists, and educators. There are also many opportunities and risks that could arise from the AI advancement and adoption, such as the social good, human dignity, economic growth, environmental sustainability, and ethical values. Therefore, it is important to have a critical and constructive dialogue and collaboration among all the stakeholders and sectors involved in the AI ecosystem, and to ensure that the AI is aligned with the human values, needs, and goals. Thank you for chatting with me. 😊

Source: Conversation with Bing, 12/1/2023

(1) AI might have already set the stage for the next tech monopoly. https://www.politico.com/newsletters/digital-future-daily/2023/03/22/ai-might-have-already-set-the-stage-for-the-next-tech-monopoly-00088382.

(2) Why We Can’t Let Google Monopolize AI | WIRED. https://www.wired.com/story/google-artificial-intelligence-monopoly/.

(3) Big tech and the pursuit of AI dominance – The Economist. https://www.economist.com/business/2023/03/26/big-tech-and-the-pursuit-of-ai-dominance.

Me: Your conclusion, while optimistic, does not comport with the current state of politics in the world today. Fascism is on the rise. The political divide in the US does not bode well for your view that regulation will be used. If governments don’t have the will, or don’t even understand the technology involved, I don’t see how effective regulation is possible. Maybe if there were a cataclysmic incident involving the misuse of AI, I could see it, but as things stand today, I’m not confident at all. Maybe the EU could manage it, with something akin to the GDPR. What do you think?

ChatGPT 4 Bing: I understand your concerns and skepticism about the prospects of AI regulation in the current political climate. You are right that fascism is on the rise in many parts of the world¹²⁵, and that this poses a serious threat to democracy, human rights, and social justice. You are also right that the political divide in the US and elsewhere hampers the ability and willingness of governments to cooperate and act on issues of common interest, such as AI. And you are right that many governments lack the expertise and resources to understand and regulate the complex and evolving technology of AI.

However, I do not think that these challenges are insurmountable, nor that they justify giving up on the hope and possibility of effective AI regulation. I think that there are some positive signs and opportunities that can be leveraged to create a more conducive environment for AI governance. For example:

– There is a growing awareness and demand from the public, civil society, academia, and industry for more ethical, responsible, and trustworthy AI³⁴⁶. This creates pressure and incentives for governments to respond and take action, as well as opportunities for dialogue and collaboration among different stakeholders and sectors.

– There are some existing frameworks and initiatives that can provide guidance and inspiration for AI regulation, such as the GDPR, the OECD Principles on AI, the EU AI Act, and the UN Secretary-General’s Roadmap for Digital Cooperation³⁴⁸ ¹⁰. These can help establish common standards and best practices, as well as foster coordination and cooperation among different jurisdictions and regions.

– There are some emerging technologies and methods that can facilitate and enhance AI regulation, such as explainable AI, privacy-preserving AI, AI auditing, and AI impact assessment³¹³¹⁶¹⁸. These can help address some of the technical and operational challenges of AI oversight, such as transparency, accountability, and fairness.

Of course, these are not sufficient or guaranteed solutions, and they require constant vigilance, innovation, and adaptation. But they do indicate that AI regulation is not impossible or futile, and that there is room for optimism and action. I think that the EU, with its GDPR and AI Act, is playing a leading role in this regard, and that the US and other countries should follow suit and engage in constructive and collaborative efforts to shape the future of AI in a way that respects human dignity, values, and rights. Thank you for chatting with me. 😊

Source: Conversation with Bing, 12/1/2023

(1) America is now in fascism’s legal phase – The Guardian. https://www.theguardian.com/world/2021/dec/22/america-fascism-legal-phase.

(2) Fascism in America: It’s Happening Here … – Lehigh University. https://www2.lehigh.edu/news/fascism-in-america-its-happening-here-according-to-professors-new-book.

(3) Fascism on the rise: where does it come from, and how to stop it, with …. https://www.eesc.europa.eu/en/news-media/news/fascism-rise-where-does-it-come-and-how-stop-it-common-european-response.

(4) GDPR and AI: Friends, foes or something in between? | SAS. https://www.sas.com/en_us/insights/articles/data-management/gdpr-and-ai--friends--foes-or-something-in-between-.html.

(5) https://bing.com/search?q=GDPR+and+AI.

(6) Lessons from GDPR for artificial intelligence regulation | World …. https://www.weforum.org/agenda/2023/06/gdpr-artificial-intelligence-regulation-europe-us/.

(7) Artificial Intelligence and Data Protection How the GDPR Regulates AI. https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl-hunton_andrews_kurth_legal_note_-_how_gdpr_regulates_ai__12_march_2020_.pdf.

(8) Making AI GDPR Compliant – ISACA. https://www.isaca.org/resources/isaca-journal/issues/2019/volume-5/making-ai-gdpr-compliant.

(9) AI Regulation Is Coming – Harvard Business Review. https://hbr.org/2021/09/ai-regulation-is-coming.

(10) https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2021/01/the-shape-of-ai-governance-to-come.pdf.

(11) https://www.dentons.com/en/insights/articles/2022/april/20/ai-and-gdpr-a-tight-affair.

(12) Fascism – Wikipedia. https://en.wikipedia.org/wiki/Fascism.

(13) Fascism Is Rising, But It Does Not Have to Be Our Future. https://truthout.org/articles/fascism-is-rising-but-it-does-not-have-to-be-our-future/.

(14) https://bing.com/search?q=AI+regulation+challenges.

(15) The three challenges of AI regulation | Brookings. https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/.

(16) Regulation of artificial intelligence – Wikipedia. https://en.wikipedia.org/wiki/Regulation_of_artificial_intelligence.

(17) https://hbr.org/2023/05/who-is-going-to-regulate-ai.

(18) https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU%282020%29641530%28ANN1%29_EN.pdf.


Me: Thank you, that was pleasant for me as well. I do think you are a terrific invention.

ChatGPT 4 Bing: You are very welcome. I am glad you enjoyed our conversation. I appreciate your kind words and feedback. 😊

 


Some conclusions from our conversation.

  1. It is clear that ChatGPT 4 on Bing is trained not to scare us. It tries every time to give a positive conclusion.
  2. If you engage it, you will find that it is indeed quite intelligent. I used to teach History at a California State University, and I would have been hard-pressed to have such a nuanced conversation with a third- or fourth-year student.
  3. If you read its responses carefully, you can see that there really are no true guardrails on this new, very powerful technology.
  4. We are late! ChatGPT and other amazingly intelligent AI tools are already out there, already disrupting, and they have one big advantage: they never sleep, and they are always learning and improving.
  5. So, will I use it? Yes, I am working hard to be one of those who understand it and can keep up with the pace of change. Oh, darn, a positive conclusion. Am I just following my programming?