Discussion Thread

  • 1.  Responsible ChatGPT/LLM Uses in the Industry

    Posted 30 days ago

    As large language models continue to grow in prominence, I think we need to accept that they likely aren't going to go away - and as such we should be finding ways to responsibly leverage them in our own work. 

    I recently used ChatGPT to help me write an R script that is beyond my proficiency to have coded by hand (at least within a short period of time), but not beyond my ability to check the results of. I find this an acceptable use case since I can verify that the code it generates works correctly. However, I also recognize that this doesn't mean I am suddenly more proficient in R. 

    Has anyone else found use-cases in their day-to-day work that they would like to share?



    ------------------------------
    Christopher Seigel P.E., M.ASCE
    Civil Engineer
    ------------------------------


  • 2.  RE: Responsible ChatGPT/LLM Uses in the Industry

    Posted 25 days ago

    Hey Christopher, I'm still a student and no industry expert, but my feeling is that even this sort of careful use of ChatGPT is shooting yourself in the foot. In my experience in programming classes, reading textbook code or understanding a demo is often deceptively easy. I've caught myself many times thinking that I understood something as soon as it stopped being actively confusing, but long before I understood it well enough to use it. If I hadn't put in the time and effort to learn each skill in practice, I wouldn't be able to see or understand later errors I came across. If you keep using ChatGPT for R scripts, you're ensuring that you'll never be able to write them at that quality or higher, and you're opening yourself up to more mistakes by not deeply understanding the code. I think it's the same for pretty much any skill. Even whipping up quick blurbs is a valuable ability that won't develop or stick around if it isn't practiced. 
    Beyond skill acquisition, there's an argument against LLM use about responsibility. As the line goes, a computer can never be held accountable, therefore a computer must never make a management decision. I think it's deeply irresponsible as a civil engineer to bet the stability of a structure on the accuracy of an LLM, and deeply irresponsible of this profession to allow the critical safety checks in place to deteriorate by going unpracticed. I understand that this is a long way off from the example you gave, but it seems to me that normalizing casual LLM use is fast-tracking us in that direction. 
    Finally, a point I often hear from architects and planners is that a civil engineer's job should not be merely to build by the book, but to consider the cause and effect of their work as fully as possible. This includes equity and environmental issues, issues which are at the heart of my desire to become a civil engineer, and issues which, to my knowledge, LLMs broadly make worse. If we care about these things as they pertain to infrastructure, why wouldn't we care about them in other areas, too?
    Again, I'm no expert on any of this, but the way I see it, I can't think of many uses of LLMs that don't deteriorate critical systems and skills in the long run in a way that a profession like civil engineering can't afford. 



    ------------------------------
    Martin Landis S.M.ASCE
    Atlanta Congress for the New Urbanism Student Liaison
    Atlanta GA
    ------------------------------



  • 3.  RE: Responsible ChatGPT/LLM Uses in the Industry

    Posted 23 days ago

    Really well said, Martin - I echo a lot of these concerns. While I do use plenty of automation to make my work more enjoyable or efficient, I don't think a probabilistic tool has much of a place in that. I also regularly do my work without the benefit of the automations I've built up over the years (whenever my schedule allows) to make sure I'm not letting any of those skills atrophy by relying on a simplified method when it's crunch time.

    The only place it would really be practical for me to use ChatGPT is in writing Excel functions, but I don't think I'd be comfortable with that. Firstly, my company should be (and is) providing me with the time to develop those skills on my own, and I think I would be cheating myself if I didn't take advantage of that. But also, if I'm not proficient enough to write the function myself, how do I know I'm proficient enough to confirm it's appropriate to the degree required for this industry? Has it shown me some operator or function with limitations I'm not familiar with - one that works fine for the example data I gave it, but behaves unexpectedly with some edge case I'll come across in my real work? Has it given me something that's fine for a dataset of 5 rows but won't behave the same way for a larger dataset? And, less of a concern for Excel functions, but I know unknowingly introducing security flaws and vulnerabilities is a major concern for coding with LLMs in general. I feel like I'd have to do so much verification that I might as well just do it myself to begin with. 
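    The "works on the sample, fails on the edge case" worry above can be made concrete with a small sketch. This is Python rather than an Excel formula, and the function names and data are my own illustration, not anything from the thread:

```python
# Sketch of the edge-case concern: a "works on the demo data" average
# that an assistant might plausibly produce, next to a version that
# handles cases the five example rows never exercised.

def naive_mean(values):
    # Fine for the small example dataset the assistant was shown...
    return sum(values) / len(values)

def robust_mean(values):
    # ...but real data can contain blanks (None) or be empty entirely.
    cleaned = [v for v in values if v is not None]
    if not cleaned:
        return None  # explicit "no data" instead of a crash
    return sum(cleaned) / len(cleaned)

sample = [3.0, 4.0, 5.0, 4.5, 3.5]   # demo dataset: both versions agree
real = [3.0, None, 5.0]              # one blank cell breaks naive_mean
assert naive_mean(sample) == robust_mean(sample) == 4.0
assert robust_mean(real) == 4.0      # naive_mean(real) raises TypeError
```

    The point is that both versions look identical on the example data, so checking the output against the sample alone would never reveal the difference.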



    ------------------------------
    Renn Henry, PE
    Staff Engineer
    ------------------------------



  • 4.  RE: Responsible ChatGPT/LLM Uses in the Industry

    Posted 23 days ago

    Hi Martin,

    I appreciate how seriously you're thinking about the implications of LLM use - both for individual learning and for the profession as a whole.

    You're absolutely right that understanding code (or any technical skill) on a deep level requires more than just getting it to work. That's exactly why I framed my use of ChatGPT as something done in conjunction with my ability to verify and reason through the outputs. I still need to understand what the code is doing, why it works, and how to catch errors.

    Let me pose a scenario to you: You work for a client who has asked you to develop some high-level statistics about their wastewater collection system. The task has a fairly short timeline and a limited budget - one appropriate for a trained programmer, but not for an engineer coding as an amateur. You know you can perform the work, but it would take you weeks instead of days if you had to look up every piece of syntax, function, or package from scratch. You also have results for 20% of the data that you can use to check your work. Do you use the tools available to deliver the results on time and within the budget, or do you tell the client that you can't answer a very reasonable question?
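    The verification step in that scenario can be sketched roughly as follows. This is a Python stand-in for the generated R script; the summarize() function, the statistics chosen, and the 1% tolerance are hypothetical illustrations:

```python
# Rough sketch of the check described above: run the generated code's
# summary over the subset with already-trusted results and compare.
# summarize() and the 1% tolerance are hypothetical stand-ins.

def summarize(flows):
    # Stand-in for the generated analysis: total and peak flow.
    return {"total": sum(flows), "peak": max(flows)}

def verify_against_known(flows, known):
    # Accept the generated code only if it reproduces the results we
    # already trust for ~20% of the data, within a small tolerance.
    computed = summarize(flows)
    return all(abs(computed[k] - known[k]) <= 0.01 * abs(known[k])
               for k in known)

known_subset = [10.0, 12.5, 9.8]          # data with trusted results
trusted = {"total": 32.3, "peak": 12.5}   # independently derived
assert verify_against_known(known_subset, trusted)
```

    Only once the generated code reproduces the trusted 20% would it be run over the remaining data, which is the kind of verification-before-use I had in mind.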

    There's a reason that the topic of this post is asking others to identify acceptable use cases in the industry. You referenced IBM's 1970s slide which stated that a computer should never make a management decision since it cannot be held accountable - and I agree. Likewise, I absolutely do not support using LLMs to validate a structure's stability or take the place of any field's critical safety checks. But if civil engineers ignore LLMs altogether, the technology will evolve without our input. If we engage with them critically, we have a chance to help shape their application, limitations, and safeguards.

    As to your point about the environmental impacts of LLMs and AI – this may end up being a deal-breaker in the long term. However, these tools are also being employed for environmental and climate-related projects. On this topic, it seems like it's going to become a value engineering question. In other words, are these tools helping the environment more than harming it?  

    As you begin your career, I would like to encourage you to remain steadfast in your ethics while continuing to keep your eyes open for situations where LLMs might serve as tools, rather than decision-makers. I'm genuinely glad to know that students such as yourself are already thinking so deeply about these issues and are holding themselves to standards that the public deserves.



    ------------------------------
    Christopher Seigel P.E., M.ASCE
    Civil Engineer
    ------------------------------



  • 5.  RE: Responsible ChatGPT/LLM Uses in the Industry

    Posted 13 days ago

    Hi Martin,

    Thank you for engaging in this discussion.  You have introduced points that are worthwhile to look at more closely.

    To begin, there are a lot of tools that we currently use that we do not fully understand.

    How many engineers use analysis programs that they could not have written themselves?  We still use them; responsible engineers find ways to verify the accuracy of the results.  While the argument regarding whether computers can be held responsible is relevant for much of the current misuse of AI (aka ChatGPT/LLM), in this case it is a false equivalence.  When discussing responsible use of a new technology it is important not to confound different categories of use. Christopher did not use ChatGPT to let the computer make decisions; he used it to create a way to do a calculation that he needed, verified the results, and then used the results.

    Before calculators, hundreds of engineers were required to do simple mathematics to carry out structural calculations for jobs.  There were tables of values for mathematical functions and people learned how to use slide rules.  Now we model buildings using computers and a lot of assumptions about modeling, and then build structures that would have been difficult or impossible to design safely by hand.  Is this an irresponsible use of technology?  I don't fully understand ETABS, while I do understand the principles behind it, and I use it to design structures.  I also verify the results, making me, not the computer, responsible.  As a PE this can be a little scary since my responsibility endures until I die.

    A working engineer is constantly faced with new challenges and questions that need to be solved using the application of basic principles, calculations, and verification of results.  New methods are introduced to this process and have resulted in huge strides in our technology.  My questions to Christopher (already essentially answered below) would be: 1. why did you choose to use ChatGPT in lieu of programming it yourself? 2. how were you able to verify the results?  This allows me to begin to answer his question.  Then I might want to throw in some of my own concerns, such as measuring energy use against results.

    I too am deeply concerned about the energy consumption of AI.  In everyone's enthusiasm to implement it and not be left behind, it is often being used frivolously and in cases where less energy-intensive methods would produce the same result.  Equity and environmental issues have for too long not been part of the engineering equation, and in a profession that intends to serve all people, putting these concerns in the forefront is long overdue.  I would like to see engineering learn to quantify the expense of projects to society in general.  Examples include adding life cycle analysis to the cost of building, or making developers pay a larger share of the infrastructure cost required to support expansion of a subdivision.  How do we ensure that costs of projects are not externalized?  Is that outside of our scope as engineers?

      



    ------------------------------
    Sarah Halsey P.E., M.ASCE
    New York NY
    ------------------------------