Published on: Monday, August 28, 2023

The Little Chatbot that Could (but Should It?)

By Jessica Jones

It’s hard to read the headlines now and not encounter news stories, think pieces, and personal opinions of varying levels of informed-ness about artificial intelligence (AI). We can feel all kinds of ways about it, but I believe we all have an obligation to do what we can to learn to recognize, use, and coexist with it, in order to mitigate its potential for harm.

To be clear, what we are seeing now with ChatGPT, Google’s Bard, and Microsoft’s Bing Chat (codenamed Sydney) is a lite version of AI. None of these is really what we associate with the term “artificial intelligence.” None is 2001: A Space Odyssey’s HAL, for instance. They are, however, powerful programs and algorithms that can be manipulated in far-reaching ways. So, for the purposes of this article, when I refer to “AI,” I mean the likes of ChatGPT, Bard, and Sydney.

Much of my personal background is in the humanities (emphasis on human-ities), and I fully recognize that this is a bias I bring to the topic, so I have made an effort to do my due diligence in understanding the positive potential of AI as well as its pitfalls.

I wanted a more holistic look at the implications of AI in our lives, so I spoke with a friend who has as well-rounded an outlook on it as I could imagine. Dr. Jason Hemann is a professor of computer science at Seton Hall University with degrees in history, philosophy, and computer science—a perfect combination to put AI in context.

Dr. Hemann says that he has already changed how he teaches based on AI’s accessibility to the public. He believes that the technology isn’t going anywhere and that we need to adapt to it. For instance, in addition to straightforward coding assignments, Dr. Hemann has asked students to give a chatbot instructions to write the code for a program. The students have to learn how to effectively delegate tasks to it and understand its limitations. To demonstrate mastery of the material, he asks them to evaluate the programs that AI writes, as well as scale the programs up to test their flexibility and usability.

This type of assignment is an excellent example of applied machine learning, a field rapidly gaining prominence in computer science and engineering curricula. Machine learning is essentially the practice of teaching computers (the machines) to learn from examples and produce desired outcomes, rather than following rules a programmer has spelled out in advance.
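To make that idea concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Dr. Hemann’s course or from any particular chatbot. The program is never told the rule y = 2x + 1; it is only shown example inputs and outputs, and it nudges two numbers until its guesses match them.

    # Toy machine learning: infer the hidden rule y = 2x + 1 from examples.
    examples = [(x, 2 * x + 1) for x in range(10)]  # (input, correct output) pairs

    slope, intercept = 0.0, 0.0   # the machine's initial guess: y = 0x + 0
    learning_rate = 0.01

    for _ in range(5000):         # many small rounds of practice
        for x, y in examples:
            error = (slope * x + intercept) - y
            # Nudge both numbers in the direction that shrinks the error.
            slope -= learning_rate * error * x
            intercept -= learning_rate * error

    print(f"learned rule: y = {slope:.2f}x + {intercept:.2f}")
    # Prints approximately: learned rule: y = 2.00x + 1.00

No one writes the 2 or the 1 into the program; the machine arrives at them by trial and error. Chatbots work on the same principle, only with billions of adjustable numbers and mountains of text instead of ten tidy number pairs.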

Machine learning will facilitate the automation of many tasks, but humans still need to understand the code it generates. AI will no more replace programmers than Google has replaced librarians (despite so many people predicting it … for years). But, just as the Internet has changed the way libraries operate, AI will likely change the programming landscape.

Will we need as many programmers in five years as we have today? I don’t know, but as AI comes for white-collar jobs, I do expect conversations about universal basic income to escalate. But that’s another topic for another day!

One of the benefits of AI and machine learning is that programming and coding will become more accessible to people who do not have extensive experience and training, much of which is expensive and time consuming. Less gatekeeping can mean more inclusive innovation and fewer barriers to entrepreneurship. It can also serve as a form of informal oversight: the more people who can read and write code, the more eyes there are on how these systems actually work.

A more diverse programming landscape is good for all of us, especially in light of the biases that programmers can bring to their code. Scientific American published an article in May 2023 arguing that “law enforcement agencies that use automated facial recognition disproportionately arrest Black people. We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues.”

The AI bot Midjourney was asked to generate images of professors in different areas of study, and almost all of the generated images appeared to be white people, and the majority appeared to be men. That is not entirely unrepresentative of the demographics of American academia, but is it what academia should look like? Is that an impression we want to reinforce? These are rhetorical questions, of course, because a diverse instructional body is better able to connect with a more diverse student body, which in turn facilitates better learning outcomes.

Earlier this year, a new Drake and The Weeknd track, “Heart on My Sleeve,” hit streaming services and quickly went to the top of the charts, but it wasn’t actually Drake and The Weeknd. Their voices, lyrics, and beats were generated by AI. The artists’ record companies immediately mobilized to have the track taken down, citing copyright infringement, but the other issue that arises is more existential: What is the role of authenticity in our lives now?

Is “Heart on My Sleeve” a real song? If the person(s) who wrote the commands that generated the work is not Black, does using Drake’s and The Weeknd’s likenesses count as cultural appropriation? How do we know how to respond to art when we doubt its origins?

I listened to the song, “Heart on My Sleeve,” and it was good! It sounds like Drake’s and The Weeknd’s voices, the lyrics are interesting, and the beat is catchy. If AI can fake art this convincingly, I feel for the English and history instructors out there who are already inundated with AI-generated essays and papers in their grading piles.

AI is a powerful tool, but it is not above criticism. It opens doors, but it can also reinforce problematic practices and ideologies, from racism to plagiarism to copyright infringement. And none of us is above falling for a fake. Keep questioning, keep fact-checking, and, when in doubt, go to the Library!

For more articles like this, check out the August Newsletter: https://takomaparkmd.gov/news/newsletter/