
2023-04-07

Three Perspectives on Large Language Models and Their Emerging Applications



This article is part of the series: How to Talk About AI

As an engineer who has worked on machine learning and data science for several years, I find this an interesting moment to witness the impact of large language models (LLMs) such as OpenAI's GPT. Overall, I'm optimistic about the current situation and the near future, in which a lot of simple work can be offloaded to machines [1] (ChatGPT, for example) and we can instead invest our effort in the further possibilities these applications unlock. Yet LLMs are not a complete substitute for human labor, because a model's behavior is bounded by its training data and algorithms; that is, as long as our goal is to create something for humans to consume, rather than to sit back and watch machine-to-machine conversations, it is we who must understand the limitations, tweak the inputs, and assess the outputs.

Here, I'd like to share three perspectives I have formed over the last few weeks through meaningful discussions with my colleagues, friends, and mentees.

Perspective #1: LLMs' Sci-Fi-like role. One of my colleagues with little-to-no software engineering expertise shared their excitement about an emerging AI technology and its chat completion feature. To unpack where the excitement comes from, I pointed out the significance of its performance and accessible interface; even though most of the underlying concepts, such as conversational AI, text summarization, multi-modal learning, and vector embedding, aren't new, making these capabilities accessible at scale through a natural-language chatbot has changed the whole story [2]. For years, I had a hard time convincing stakeholders of the idea and impact of machine learning algorithms. Now, the word "ChatGPT" explains everything, and it is simply surprising to see how easily these conversations go in professional settings. Hence, I see the role of these AI-driven applications as similar to that of sci-fi: intuitive stories make it easier for ordinary people to imagine possible futures and "what if" scenarios. That is, our perception of the world has changed dramatically regardless of the scientific details, which paradoxically surfaces the importance of being mindful of the underlying data and algorithms in the real world.
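To make the "accessible interface" point concrete, here is a minimal sketch of a chat completion request, assuming the openai Python package (around v0.27, current as of this writing) and an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative:

```python
import os

import openai

# Assumes the openai Python package (~v0.27) and a valid API key.
openai.api_key = os.environ["OPENAI_API_KEY"]

# The entire interface is a list of natural-language messages;
# no feature engineering or model training is involved.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize conversational AI in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The point is not this specific endpoint but the fact that a plain conversation is the whole interface; that accessibility, rather than any single algorithmic breakthrough, is what changed the conversations I have with stakeholders.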

Perspective #2: Questionable power dynamics. Speaking of data and algorithms, one of my friends who is familiar with machine learning technologies brought up their frustration at big corporations' dominating power and their extraordinary concentration of computing, human, and financial resources, which makes today's LLMs possible and yields black-boxed proprietary models [3]. I generally agree that LLMs could simply help advantaged people become wealthier and take opportunities away from marginalized populations at a faster rate [4], and I am deeply concerned about this circumstance. However, I doubt the companies are "ignoring" these issues. For example, researchers have discussed the potential risk of exposing model details in anticipation of malicious use by bad actors [5]. Reading a few of these articles makes it clear to me that OpenAI is paying special attention to how it communicates with society and stays responsible in the long run. Thus, although criticizing one side over the other is easy, I believe it is more important for each of us to take responsibility in our own way and seek a path toward a healthier society as a whole. In this context, there is, for instance, a suggestion to build publicly owned LLMs [6].

Perspective #3: After all, is it okay to use LLMs? My mentees are generally new to software development, and they immediately found AI-enabled tools like ChatGPT and GitHub Copilot helpful for their coding work. At the same time, however, they told me they feel bad about using these automation tools because it feels like cheating, and they are scared of losing job opportunities to these AIs. My personal opinion is that, as long as you understand the potential risks and limitations [7], it is okay to use them as tools. And even if "how we work" changes due to the breakthrough, our job remains within the broader context of software development, as long as the stakeholders are human beings. Think about the use of software libraries: today, people rarely build web applications from scratch in vanilla JavaScript; they most likely use proprietary or open-source libraries like React and Vue instead. Since there are underlying risks (e.g., security vulnerabilities and bugs) and limitations (e.g., you cannot implement every possible animation on the web with a pre-defined function; there has always been a balance between ready-made and customized solutions), the use of code written by someone else requires certain knowledge, experience, and ethics. No matter how sophisticated the tools are and how much our way of coding changes, the essentials remain the same, and it is always our responsibility to use them with care. In fact, I first worked on web development more than 10 years ago and most recently less than a year ago, with a gap of several years in between; even though everything changed over that decade, my thought process and step-by-step approach to building a deliverable were the same and still worked. Therefore, it is normal to adapt to new tools to make our work more efficient and impactful, but we must keep our fundamental literacy up to date and understand what it means amid current technological trends.
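To illustrate what "using them with care" can look like in practice, here is a hypothetical sketch: treat an AI-suggested snippet like any other third-party code and pin down its expected behavior with tests before relying on it. The slugify helper and its test cases below are made up for illustration:

```python
# Hypothetical example: suppose an AI assistant suggested this helper.
def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())


# Before trusting the suggestion, pin down the behavior we expect,
# including edge cases the assistant may not have considered.
def test_slugify() -> None:
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("") == ""


test_slugify()
```

Whether the snippet came from a chatbot, a Q&A site, or a library, the review step is the same; the tool changes, but the responsibility does not.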

Apart from everything I discussed above, there is an essential question for me to answer: Do I want to keep working in this field? The exponential growth of LLMs and the industry's adoption of them seem unstoppable, and many topics I used to spend significant effort on have become extremely accessible to a larger population, which is great. Meanwhile, even though I believe there are still many areas where I can contribute in a meaningful way, they don't necessarily resonate with my mission and values. For a similar reason, I stopped pursuing a web development career after the rise of the Virtual DOM. To me, it was a paradigm shift that unlocked new possibilities, changed how we build applications, commoditized traditional technologies, and, inversely, made everything "boring"; I literally felt I was done with that kind of job, and I slowly started working on something different. Note that the experience does remain: I could easily catch up and work on modern web development less than a year ago without any struggle. But I am simply not into that particular technology anymore. Hence, I wouldn't be shocked to see this kind of transition in my inner self anytime soon.

1. Capitalists' reliance on un(der)paid labor is highlighted in Invisibility by Design, for example, with a case from Japan's rising digital economy in the 1990s-2000s. The study surfaces the gap between capitalists and laborers, and we can easily see the same imbalance today at a global scale. It would be great if AI tools could intervene in this unfair relationship in a positive way.
2. While ChatGPT has been making a huge buzz lately, the features its API endpoints (the OpenAI API) offer make it clear that GPT is a multi-modal LLM that does a surprisingly good job on a collection of conventional machine-learning tasks. Looking closely at the technology and design choices behind the buzz, and separating what's truly novel from what isn't, is the crucial first step to assessing the tool.
3. To give an example, see the GPT-4 Technical Report, which "contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar" (Section 2).
4. In Atlas of AI, AI researcher Kate Crawford sheds light on crucial aspects of emerging AI technology that many wealthy people, including developers themselves, have (knowingly) overlooked: labor-intensive underlying data work, military use of the technology, and biases embedded in its classification systems. These all relate to a history of extraction from and exploitation of those who have been disempowered, discriminated against, and harmed by the technologies. And the gap will simply widen more rapidly if we keep deploying the new technology as freely as we want.
5. In The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, the scholars suggest the importance of exploring different openness models for AI development. This can mean working with security specialists and being careful about disclosing model details, to minimize the risk of malicious use.
6. A correspondence in Nature Machine Intelligence, Large Language Models Challenge the Future of Higher Education, discusses the possibility of creating publicly funded LLMs in a stakeholder-led, open setting.
7. Assessing risks and limitations without knowing model details can be difficult, but, for starters, there are numerous reference points online that debate reliability [3], vulnerability [5], and content moderation.







Last updated: 2023-04-07

  Author: Takuya Kitazawa

Takuya Kitazawa is a freelance software developer who previously worked at a Big Tech company and a Silicon Valley-based start-up, where he wore multiple hats as a full-stack software developer, machine learning engineer, data scientist, and product manager. Working at the intersection of the technological and social aspects of data-driven applications, he is passionate about promoting the ethical use of information technologies through his mentoring, business consultation, and public engagement activities. See CV for more information, or contact at [email protected].


  Disclaimer

  • Opinions are my own and do not represent the views of organizations I belong or have belonged to.
  • I am doing my best to ensure the accuracy and fair use of the information. However, there may be errors, outdated information, or biased subjective statements, because the main purpose of this blog is to jot down my personal thoughts as soon as possible, before conducting an extensive investigation. Visitors should understand these limitations and rely on any information at their own risk.
  • That said, if there is any issue with the content, please contact me so I can take the necessary action.