Hi Martin,
I appreciate how seriously you're thinking about the implications of LLM use - both for individual learning and for the profession as a whole.
You're absolutely right that understanding code (or any technical skill) on a deep level requires more than just getting it to work. That's exactly why I framed my use of ChatGPT as something done in conjunction with my ability to verify and reason through the outputs. I still need to understand what the code is doing, why it works, and how to catch errors.
Let me pose a scenario to you: You work for a client who has asked you to develop some high-level statistics about their wastewater collection system. The task has a fairly short timeline and a limited budget - one appropriate for a trained programmer, but not for an engineer or amateur. You know you can perform the work, but it would take you weeks instead of days if you had to look up every piece of syntax, function, or package from scratch. You also have results for 20% of the data that you can use to check your work. Do you use the tools available to deliver the results on time and within the budget, or do you tell the client that you can't answer a very reasonable question?
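To make that scenario concrete, here is a minimal sketch of the spot-check step: comparing the script's output against the 20% of results you already trust. It is written in Python purely for illustration (the thread mentions R), and every ID, value, and tolerance below is an invented assumption, not real project data.

```python
# Hypothetical sketch of the "check your work" step from the scenario above:
# compare computed statistics against the subset of results you already trust.
# All IDs, numbers, and the tolerance are illustrative assumptions.

def validate_against_known(computed: dict, known: dict, tol: float = 1e-6) -> list:
    """Return the IDs where computed stats disagree with trusted results."""
    mismatches = []
    for site_id, expected in known.items():
        actual = computed.get(site_id)
        if actual is None or abs(actual - expected) > tol:
            mismatches.append(site_id)
    return mismatches

# Output of the (hypothetically LLM-assisted) script, e.g. flow totals:
computed = {"MH-101": 42.0, "MH-102": 17.5, "MH-103": 9.9}
# The 20% of the system you can verify independently:
known = {"MH-101": 42.0, "MH-103": 10.0}

bad = validate_against_known(computed, known, tol=0.05)
# Any IDs in `bad` would need to be investigated by hand before delivery.
```

The point of the sketch is the workflow, not the code: the generated script never gets the final word, because its output is checked against results the engineer established independently.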
There's a reason that the topic of this post is asking others to identify acceptable use cases in the industry. You referenced IBM's 1970s slide which stated that a computer should never make a management decision since it cannot be held accountable - and I agree. Likewise, I absolutely do not support using LLMs to validate a structure's stability or take the place of any field's critical safety checks. But if civil engineers ignore LLMs altogether, the technology will evolve without our input. If we engage with them critically, we have a chance to help shape their application, limitations, and safeguards.
As for your point about the environmental impacts of LLMs and AI: this may end up being a deal-breaker in the long term. However, these tools are also being employed for environmental and climate-related projects. It seems likely to become a value engineering question; in other words, are these tools helping the environment more than they are harming it?
As you begin your career, I would like to encourage you to remain steadfast in your ethics while continuing to keep your eyes open for situations where LLMs might serve as tools, rather than decision-makers. I'm genuinely glad to know that students such as yourself are already thinking so deeply about these issues and are holding themselves to standards that the public deserves.
------------------------------
Christopher Seigel P.E., M.ASCE
Civil Engineer
------------------------------
Original Message:
Sent: 05-20-2025 07:00 AM
From: Martin Landis
Subject: Responsible ChatGPT/LLM Uses in the Industry
Hey Christopher, I'm still a student and no industry expert, but my feeling is that even this sort of careful use of ChatGPT is shooting yourself in the foot. In my experience in programming classes, reading textbook code or understanding a demo is often deceptively easy. I've caught myself many times thinking that I understand something as soon as it isn't actively confusing, but long before I understand it well enough to use it. If I didn't put in the time and effort to learn each skill in practice, I wouldn't be able to see or understand later errors I came across. If you keep using ChatGPT for R scripts, you're ensuring that you'll never be able to write them at that quality or higher, and you're opening yourself up to more mistakes by not deeply understanding the code. I think it's the same for pretty much any skill. Even whipping up quick blurbs is a valuable ability that won't develop or stick around if it isn't practiced.
Beyond skill acquisition, there's an argument against LLM use about responsibility. As the saying goes, a computer can never be held accountable, therefore a computer must never make a management decision. I think it's deeply irresponsible as a civil engineer to bet the stability of a structure on the accuracy of an LLM, and deeply irresponsible of this profession to allow the critical safety checks in place to deteriorate by going unpracticed. I understand that this is a long way off from the example you gave, but it seems to me that normalizing casual LLM use is fast-tracking us in that direction.
Finally, a point I often hear from architects and planners is that a civil engineer's job should not be merely to build by the book, but to consider the cause and effect of their work as fully as possible. This includes equity and environmental issues, issues which are at the heart of my desire to become a civil engineer, and issues which, to my knowledge, LLMs broadly make worse. If we care about these things as they pertain to infrastructure, why wouldn't we care about them in other areas, too?
Again, I'm no expert on any of this, but the way I see it, I can't think of many uses of LLMs that don't deteriorate critical systems and skills in the long run in a way that a profession like civil engineering can't afford.
------------------------------
Martin Landis S.M.ASCE
Atlanta Congress for the New Urbanism Student Liaison
Atlanta GA
Original Message:
Sent: 05-16-2025 10:36 AM
From: Christopher Seigel
Subject: Responsible ChatGPT/LLM Uses in the Industry
As large language models continue to become prominent, I think it needs to be accepted that they likely aren't going to go away - and as such we should be finding ways to responsibly leverage them in our own work.
I recently used ChatGPT to help me write an R script that would have been beyond my level of proficiency to code by hand (at least within a short period of time), but not beyond my ability to verify. I find this to be an acceptable use case since I can confirm that the code it generates is working correctly. However, I also recognize that this doesn't mean I am suddenly more proficient in R.
Has anyone else found use cases in their day-to-day work that they would like to share?
------------------------------
Christopher Seigel P.E., M.ASCE
Civil Engineer
------------------------------