The Intercollegiate Student Magazine

Language Models Need Innovative Regulation


“We can only see a short distance, but we can see plenty there that needs to be done.”

—Alan Turing

In a world on the cusp of a technological renaissance, a revolutionary force began to weave its way into the fabric of the urban landscape. The new technology, aptly named “the skyscraper”, began emerging in Chicago and soon spread to other cities like New York, Boston, and Pittsburgh. Steel-framed structures rose high above the city, casting long shadows onto the streets below. Citizens quickly expressed concerns that these new structures would prevent light and air from entering the city, leading Chicago to place a height limit of 130 feet on all newly constructed buildings in 1893.

Looking at today’s skylines, it’s hard to imagine how a height restriction would have benefited a city’s economic development or quality of life. At the time, however, people had not yet interacted with this new type of architecture enough to understand how it should be regulated and how it could enhance their lives. We are now on the cusp of another technological renaissance with the emergence of large language models (LLMs), and we must, once again, grapple with the challenge of how to regulate a new technology.

Since OpenAI released its chatbot, ChatGPT, in November 2022, the general public has had access to this unprecedented technology. Individuals have demonstrated its unique ability to write code, compose music, and assist with academic research.

But this technology isn’t just revolutionizing information retrieval; it’s opening up entirely new avenues in education. ChatGPT is emerging as a tool that enhances creativity, facilitates learning, and augments problem-solving capacities. Chatbots like ChatGPT are already becoming a substitute for Google search for many people: they provide quicker answers that are easier to understand, and they can elaborate on their own explanations, which helps people grasp complex topics. Moreover, LLMs can personalize interactions, allowing users to feel like they are having a conversation rather than just receiving information.

This level of interactivity and adaptability makes LLMs more than just information dispensers; they can be educators, facilitators, and any other expert one needs them to be. As technology continues to advance, its potential applications and roles in our daily lives will only expand, reshaping how we seek and consume information.

Despite their merits, LLMs have not been immune to the classic “paranoia” stage associated with new technologies, in which controversies arise and some even advocate for a complete shutdown of all future development. Skeptics claim that humanity is treading on dangerous ground, potentially unlocking forces it cannot control. With AI, concerns are not just about job displacement or economic disruption, but also about deeper philosophical and ethical dilemmas related to academic integrity, originality, and the essence of human creativity.

The introduction of ChatGPT into our daily lives has prompted an alarmist response from educators who worry that the tool will encourage academic dishonesty, given the chatbot’s proven ability to excel on exams and other academic tasks. This raises a compelling question: if an AI can achieve such high scores on an assignment, how well is that assignment really testing a human student’s understanding? Is there a critical element––a human element––that differentiates a student’s grasp of a concept from an AI that is only attempting to predict its next word?

A timeless question professors should ask themselves when drafting assignments is this: “Given the abundance of tools and information available to students today, how easily could a student pass through this class without truly understanding any of the material?” In the Information Age, this question has never stopped being relevant.

When the internet first became accessible, the ease of finding answers online posed a serious threat to the integrity of students’ work. But we did not shut down the internet because of its potential to facilitate academic dishonesty. Instead, professors wrote questions that demanded critical thinking rather than rote memorization. They also stopped reusing the same questions every year, and some even deviously uploaded fake solutions to websites where they knew students would be snooping around.

To meet the capabilities of ChatGPT, professors will have to change their evaluation methods to reward a human understanding of the material. While there will be a painful period as curricula transition, the change has the potential to genuinely improve the value of education. If a language model can earn passing grades in a course, then the course is clearly in need of a revamp.

Additionally, the potential for LLMs to overcrowd our information space will require new methods of regulation. AI-generated content has already found its way into consumer products through books sold on e-commerce sites such as Amazon and Etsy, sometimes under the names of authors who had nothing to do with them. The ability of LLMs to rapidly generate material threatens to flood the environment with derivative content, resulting in a digital landscape where distinguishing the authentic from the mass-produced becomes increasingly time-consuming and arduous. This could also undermine the efforts of genuine content creators and pose considerable challenges for consumers seeking well-curated information.

E-commerce sites, for their part, will need to institute processes to restrict and regulate the presence of AI-generated content on their platforms. Of course, such regulations would eventually need to be backed by new legislation, which could be several years away.

Beyond these examples, LLMs will require creative methods of regulation, and it will take time for us to learn how to regulate them properly. Some regulations will prove too restrictive, as the height limits did in the case of skyscrapers. In that historical example, many cities eventually adopted innovative zoning solutions, like the concept of “air rights”, which prevented cities from becoming overly dense and overcrowded while still allowing buildings to rise high above the ground. Our regulations on LLMs should similarly seek to hold restriction and innovation in a delicate equilibrium.

Considering the need for thoughtful regulation, we should view LLMs as tools that will work alongside us in many aspects of our daily lives, from professional to personal settings. As in academia, there remains a critical human element absent from the answers output by LLMs and other AI tools. That element may be indescribable, but as humans, we will learn to detect when something does not authentically originate from another human being.

Employers, educators, and anyone looking to replace human jobs with AI, and with LLMs in particular, will eventually witness the effects of this missing human element. Although it is forecasted that AI could replace nearly 300 million jobs, the replacement may prove only temporary as we learn to strike the optimal balance between authentic human interaction and artificial intelligence.

Only time will tell, and that’s the crucial component that we all need to accept. Banning, heavily restricting, or fearing AI tools such as LLMs will not benefit anyone. LLMs have tremendous potential to transform our lives in positive ways, but only if we use them correctly.

Luke Contreras is a student at the University of Chicago studying computer science and economics. You can read more of his work at the Chicago Maroon.
