Discussion: View Thread

AI in Practice Today

  • 1.  AI in Practice Today

    Posted 12-04-2023 12:55 PM

    How are you using AI today, and how are you benefitting? Personally, I see AI as a boon to the civil engineering profession, leading to better outcomes, a better working environment, and greater job satisfaction. I also see the opportunity for greater creativity. The time to act is now, or we risk being left behind.



    ------------------------------
    Mitch Winkler P.E.(inactive), M.ASCE
    Houston, TX
    ------------------------------


  • 2.  RE: AI in Practice Today

    Posted 12-14-2023 10:49 AM
    Edited by Tirza Austin 12-14-2023 10:50 PM

    I'm not presently using it outside of the sparing & careful use of OCR. I have a lot of reservations about the technology and industry as a whole so I'm curious how you're benefiting from it in your practice.







  • 3.  RE: AI in Practice Today

    Posted 12-14-2023 10:50 AM

    Hey Mitch,

    I haven't had any opportunities to use AI yet. The closest I have come is using various large language models such as ChatGPT or Bard. I've found success in using these tools for assistance in writing R scripts, or in inputting a script and asking it to explain what each line is doing. Since these large language models are still frequently and confidently incorrect in their responses, I have found them safe to use in these applications strictly because I can check their results. If I ask for help with a line of code, I can try out the code and make sure it's actually doing what ChatGPT says it will do.

    Like any software, I would agree it's going to be an overall benefit to the industry, but it will need to be used knowledgeably and responsibly, understanding its limits and never treating it like a black box.
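    To make that concrete, here is a minimal sketch of the kind of spot check I mean (in Python rather than R, and the unit-conversion helper is purely hypothetical, standing in for whatever the model suggested): run the suggestion against a few inputs whose answers you already know before relying on it.

        # Hypothetical stand-in for a model-suggested helper: convert a
        # rainfall depth from inches to millimetres.
        def inches_to_mm(depth_in: float) -> float:
            """Convert a depth in inches to millimetres."""
            return depth_in * 25.4

        # Spot checks against answers worked out by hand; if any fail, the
        # suggestion goes back to the model (or the bin) for revision.
        assert abs(inches_to_mm(0.0) - 0.0) < 1e-9
        assert abs(inches_to_mm(1.0) - 25.4) < 1e-9
        assert abs(inches_to_mm(2.5) - 63.5) < 1e-9
        print("All spot checks passed.")

    The same idea applies to a whole script: feed it a small input you can verify independently before trusting it on real work.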



    ------------------------------
    Christopher Seigel P.E., M.ASCE
    Civil Engineer
    ------------------------------



  • 4.  RE: AI in Practice Today

    Posted 12-18-2023 10:53 AM

    I am mainly using it for Excel formulas, document summarization, and help with writing. I recently wrote a technical memorandum explaining an analysis we did, and ChatGPT helped me come up with good language to describe it. It can also be useful for generating generic checklists for any task you're working on.

    Outside of work, I use AI for generating transcripts of audio files and document summarization (e.g. summarize an item in my RSS feed to see if it's worth checking out in more detail).
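    For anyone curious how the transcription-plus-summary step can be scripted, here is a rough sketch in Python, assuming the openai client library (v1+) with an API key in the environment; the file name and model names are placeholders, not a recommendation.

        # Rough sketch: transcribe an audio file, then ask a chat model to
        # summarize the transcript. Assumes the `openai` package and an
        # OPENAI_API_KEY environment variable; "meeting.mp3" is a placeholder.
        from openai import OpenAI

        client = OpenAI()

        with open("meeting.mp3", "rb") as audio_file:
            transcript = client.audio.transcriptions.create(
                model="whisper-1",
                file=audio_file,
            )

        summary = client.chat.completions.create(
            model="gpt-3.5-turbo",  # any chat-capable model works here
            messages=[{
                "role": "user",
                "content": "Summarize the key points in three bullets:\n\n"
                           + transcript.text,
            }],
        )

        print(summary.choices[0].message.content)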



    ------------------------------
    Samuel Steiner EI, A.M.ASCE
    Structural Designer
    Peoria, IL
    ------------------------------



  • 5.  RE: AI in Practice Today

    Posted 12-21-2023 10:18 AM

    Hi Mitch,

    Interesting post.

    Q. Given that no less than 60% of engineers' projects fail . . . not due initially to tech . . . then why would we argue for still more tech?

    For reference on this point, skim through the book "The Blessings of Disasters."

    Cheers,

    Bill



    ------------------------------
    William M. Hayden Jr., Ph.D., P.E., CMQ/OE, F.ASCE
    Buffalo, N.Y.

    "It is never too late to be what you might have been." -- George Eliot 1819 - 1880
    ------------------------------



  • 6.  RE: AI in Practice Today

    Posted 12-21-2023 12:44 PM

    I think more tech might be an answer to mitigating failures and, in particular, the failure to learn. AI conceptually has the ability to generate, for a given project, an accurate assessment of what might go wrong and how to mitigate it. With AI, there should be no excuses.

    I have "The Blessings of Disasters."  on my list of books I want to read. 



    ------------------------------
    Mitch Winkler P.E.(inactive), M.ASCE
    Houston, TX
    ------------------------------



  • 7.  RE: AI in Practice Today

    Posted 12-22-2023 11:03 AM

    I don't think that's quite true. The way these tools are built at the moment is by scraping bulk data from the internet, then attempting to sanitize and label that data to help computers identify statistical patterns in the distributions and proximities of words and phrases. Aside from the room for bias and error that those initial processes add, tools like ChatGPT simply have no grasp of accuracy, engineering and safety principles, or even the concept of numbers; they are designed to spit out plausible-seeming responses to prompts based on statistical patterns picked up from online text. There was a pretty good article about the limitations and potential of these tools for engineering practice in the ASCE magazine a few months ago that I'd highly recommend: https://ascelibrary.org/doi/epdf/10.1061/ciegag.0001687



    ------------------------------
    Renn Henry
    Staff Engineer
    ------------------------------



  • 8.  RE: AI in Practice Today

    Posted 01-20-2024 11:48 AM

    Stephanie Slocum has an excellent article on AI in structural engineering in the January 2024 edition of Structure. Click here for the online version. Her thoughts echo many of my own. You do not have to be a structural engineer to benefit from this article and her wisdom.

    It would be great to hear from folks using the available AI tools, including ChatGPT, Claude, Google's Bard, and Microsoft's Bing, on how to get the most from them. It starts with the right prompts and strategy for drilling down.



    ------------------------------
    Mitch Winkler P.E.(inactive), M.ASCE
    Houston, TX
    ------------------------------



  • 9.  RE: AI in Practice Today

    Posted 01-22-2024 08:07 AM

    Dear Mitchell, thanks for sharing!!
    I have only used AI to check my work and rewrite my documents to improve the tone or style (ProWritingAid, DeepL, and Grammarly). However, tools such as Elicit can help you find references for your research (previous work) and establish a theoretical framework. AI is a tool and should be used with training like any other tool (e.g., a hammer...removing a nail can be dangerous in untrained hands). Good input data is the training that feeds the tool... the instructions (prompts: concise and complete). In the future, the tool will evolve, and so will we.
    Regards,
    AG



    ------------------------------
    Andres Guzman D.Eng., MEng, Ing., M.ASCE
    Associate Professor
    UNIVERSIDAD DEL NORTE
    Barranquilla
    ------------------------------



  • 10.  RE: AI in Practice Today

    Posted 01-22-2024 10:57 AM

    Here are a few of my concerns with AI in the professional setting:

    It is often clear when it was used because it sounds unnatural. I've seen multiple posts that say things like "I bet you couldn't tell this was written with AI!" but it was pretty apparent well before that point that it wasn't written by the "author" of the message.

    If you have to vet all the information and you have to edit it to still have your "voice," you might as well have written it yourself.

    There is no substitute at this point for professional writing skills, yet many students are using AI to write their papers and emails. They are potentially entering the workforce without the skills needed to succeed. If you can't write well, how will you know when AI needs to be corrected?

    I can understand being stuck on how to word something and checking what a language model recommends, but it is typically going to be faster to just write something well yourself.

    I also find that over-use of AI makes me question the ethics of the creator. For example, I was recently trying to find a contractor for some work at my home. Many of the websites were repetitively phrased and said they operate specifically in my area. However, the list of other "nearby" towns they work in was a bizarre mashup of locations no human living around here would group together. Out of curiosity, I changed my search to have a different city name included, and the same websites came up with a different collection of names. They are nationwide companies who, from what I could tell, subcontract all their work, but they are using AI within their websites to try to seem like locally owned businesses. 

    I'm not saying that there is absolutely no use for AI in our field, but I've personally seen a lot more embarrassing output (incorrect information, unnatural and repetitive phrasing, too much flowery language for a technical topic, etc.) than benefit at this point. I think the biggest outcome from AI in the coming years will be a further decline in the already questionable trustworthiness of online content. If formerly trustworthy sites start using AI to create content, and we don't know whether they've done the legwork to check all the sources, then it will become increasingly difficult to find the actual facts.



    ------------------------------
    Heidi C. Wallace, P.E., M.ASCE
    Tulsa, OK
    ------------------------------



  • 11.  RE: AI in Practice Today

    Posted 01-23-2024 03:36 PM

    Heidi, on your comment:

    There is no substitute at this point for professional writing skills, yet many students are using AI to write their papers and emails. They are potentially entering the workforce without the skills needed to succeed. If you can't write well, how will you know when AI needs to be corrected?

    Even worse is bound to happen down the road – if academic institutions do not come up with methods and strategies to gauge the real depth of knowledge of students using such AIPPS (AI Powered Products and Services). Society may end up with an overload of academic degree holders who may not know what they are talking about. And a plan to counter the trend has to be formulated fast – because AIPPS are proliferating at an exponential rate amid the wide arena of regulation-free cyberspace.

    Here are six conclusions/suggestions I came up with in my article, Artificial Intelligence – the Tool of No Limit.

    1. AI has virtually no potential limits – either on its sprawling scientific capabilities or on navigating through the regulation-free wide arena of various human activities. The only limit appears to be replicating all or some aspects of the mind phenomena.

    2. AIPPS represent a natural progression of human quests to improve lives and livelihoods. Their huge potential to transform human civilization is beyond doubt. But such potential asks all to shoulder responsibility, including the systems of funding and advertisement. These cash-injecting, powerful systems dictate the path for AI entities to follow.

    3. The performance of AIPPS is as good as the models they implement – therefore it is important to select neutral ones that are not tainted with inclinations of any kind.

    4. It is important for authorities to chart out thoughtful directions for AI entrepreneurships to follow. But directions should not come at the cost of choking entrepreneurial initiatives.

    5. Also, one has to realize that a society cannot and should not rely on the goodwill of some programmers and their organizations for things to move in the right direction.

    6. Before releasing AIPPS into the marketplace, it is important that they are calibrated with societal moral and ethical values – to adapt AIPPS to them. Otherwise bad socioeconomic entropy (see Entropy and Everything Else) will creep in – spiraling down what common humanity has achieved. Perhaps it is important to slow down somewhat by pacing progress – to have time to reflect on and evaluate the impacts of AIPPS on the future of mankind.

    Dilip

    -------

    Dr. Dilip K Barua, Ph.D

    Website Links and Profile




  • 12.  RE: AI in Practice Today

    Posted 01-24-2024 10:16 AM

    Your paper Artificial Intelligence – the Tool of No Limit has some great examples of how AI is being used in civil engineering today and how it can be used in the future. As to the risks of AI, are they really different from those of any other tool or approach? For civil engineers, and not speaking for society at large, the governing element in properly using AI tools comes back to our ethical obligation to work within one's competence. If one uses AI to generate text, the author has an ethical obligation to ensure the accuracy and integrity of their product.



    ------------------------------
    Mitch Winkler P.E.(inactive), M.ASCE
    Houston, TX
    ------------------------------



  • 13.  RE: AI in Practice Today

    Posted 01-26-2024 10:48 PM
      |   view attached

    Good questions and comments, Mitch – thanks for that.

    Here are my takes addressing them in short:

    On risks or vulnerability – I have tried to sketch some in the cited article, Artificial Intelligence – the Tool of No Limit. They are generally applicable whether it is AIPPS or any other internet communication tool. The focus was mainly on AIPPS because of its huge exponentially growing potential in encompassing many of our activities – and we are all experiencing it already.

    • If one compares the two, I would say that AIPPS – having the capability of continuous machine learning, flexibility, and adaptability – is better equipped to prevent, or at least minimize (if the providers feel responsible to do so), user risks. Risk minimization is a continuously evolving process – and they are increasingly implementing cryptosystems (systems of encryption and decryption).

    • This does not mean, however, that developers, executives, and employees of AIPPS businesses are not abusers themselves, maximizing gains and profits in a regulation-free arena (as also highlighted in my article). Such abuses – if pursued and present – have far-reaching, damaging consequences.

    • We must also understand that malicious threats and associated risks are ever-changing phenomena – because bad actors are very innovative, always trying to overcome barriers – to tamper with and open the check-valves to cause damage.

    • One must not forget that AIPPS-dictated answers are highly dependent on the 'models' and methods they employ – therefore the questions of their uncertainty (more in The World of Numbers and Chances), neutrality, and soundness come to the forefront. Simply put, AIPPS is as good as the 'models' and the personnel expertise it makes use of – or as bad as . . . Therefore, a different sort of risk also arises from this fact of AIPPS businesses.

    • Some NAP Mathematical Sciences Poster Sets shed further light on this – the attached one clarifies how AI Machine Learning depends on 'models' it applies – the quality and soundness of them – or the lack of such attributes.

    On the ethical question – it is always there, whether one uses AIPPS or any other tool. It is an individual's responsibility to check on ethics and ethical values before publishing something and claiming it is the product of his or her own intellectual endeavor. But, given the easy access and use of some AIPPS tools, the temptation to cheat can be difficult for some to overcome (maybe even the majority, if no checks and balances are in place).

    • Some of these tools not only help in doing something, but also dictate solutions – luring the user to make carbon copies of them. Thus, they could inhibit an individual's zeal and effort in figuring out solutions by himself or herself. Therefore, the question of formulating and implementing measures or strategies to counter this damaging influence comes into focus.

    Dilip

    -------

    Dr. Dilip K Barua, Ph.D

    Website Links and Profile


    Attachment(s)

    pdf
    Machine-learning.pdf   9.65 MB 1 version


  • 14.  RE: AI in Practice Today

    Posted 01-29-2024 11:54 AM

    A Professional Reflection Request:

    a. Please review the lessons in the following three (3) books:

    • "To Engineer Is Human: The Role of Failure in Successful Design," by Henry Petroski. 1992
    • "Epic Engineering Failures and the Lessons They Teach," by Stephen Ressler. 2022
    • "The Blessings of Disaster: The Lessons That Catastrophes Teach Us and Why Our Future Depends on It," by Michel Bruneau. 2022

    b. Do a root-cause thought process asking "Why" 5 times for the disasters.

    Q. Now, do you still think still more tech is the answer?

    Cheers,

    Bill



    ------------------------------
    William M. Hayden Jr., Ph.D., P.E., CMQ/OE, F.ASCE
    Buffalo, N.Y.

    "It is never too late to be what you might have been." -- George Eliot 1819 - 1880
    ------------------------------



  • 15.  RE: AI in Practice Today

    Posted 02-01-2024 02:06 PM

    Bill – I haven't read the books you referred to – therefore I cannot comment on them. Instead, I will attempt to venture into some of the points you raised.

    Here are some of my thoughts:

    A. Engineer Is Human & Learning from Failures.

    All are human – including engineers. And yes, individually, we all learn from failures (they are painful and happen to be conditioned cases of different degrees) – that's how progress is ensured – unless failures cause some sort of total breakdown or collapse that inhibits the learning process or coming back stronger.

    1. The question is whether our employers, our governing institutions and societal attitudes – see us as such. Perhaps they do – perhaps they do not. The governing principles of modern societies are the manifestation of what is known as the mechanical civilization (more in The Grammar of Industrialization – Standards, Codes and Manuals) – where standardization efforts have yielded one blanket checklist after another – that are enforced to manage things without paying due attention to details or concerns of individual cases and circumstances. It is not that the standardization efforts are not required – but they head towards unhealthy directions when intransigence or rigidity takes control. Therefore, failures of sorts – as natural as one perceives them to be – are considered punishable in one way or another. Do we ever think of the costs (monetary and otherwise) of such policies?

    2. Perhaps individually – we can do things like (more in Hold It There): . . .

    Take it easy and do not over-stretch yourself to frustration and despair

    If the pursuits to perfection – to achieving equilibrium

    Do not produce immediate results to your liking

    Because, despite all out efforts – more often than not

    We are influenced by the surrounding – that we do not have control over . . .

    B. Is More Tech the Answer?

    1. Yes – definitely, Yes. Let me justify the answer with what is written in The Quantum World: . . . Our world – let us say a civil engineer's world – is satisfied with the powerful deterministic paradigm of Newtonian (Isaac Newton, 1642 – 1727) physics, the so-called classical physics. This reality led the French scientist P. S. Laplace (1749 – 1827) to declare that determinism is sound and solid, and is the only method needed to solve any of the world's problems, including social relations [further reinforced by the convictions of later scientists: the American scientist A. A. Michelson (1852 – 1931; it would consist of adding a few decimal places to results already obtained), and the British scientist W. T. Kelvin (1824 – 1907; everything was perfect in the landscape of physics except for two dark clouds)]. Despite declarations from such renowned scientists, the frontiers of science did not stop questioning conventional wisdom – while at the same time looking for breakthroughs. . . . Technology and engineering just follow the scientific leads. But, yes, because of the high impacts of technological advances such as AI – perhaps it is important to slow down somewhat – to have time to reflect.

    2. As I see it – your question deserves more than the answer above. The other answer is: No – definitely, No. Let me justify it with our general realization (more in The All-embracing Power of Sublimities): . . . are we better than in the past – in terms of social cohesion and other personal and societal problems? The answer to this question is definitely: we are not. True, we have advanced far beyond our past – in science, technology, comfort, and gadgets. But one has many reasons to say that our social conditions – in the alleviation of mistrust, conflict, and violence – have not inched forward in parallel with material progress – if anything, some conditions might have worsened. . . . Therefore, Tech is not to blame (although it can be in some cases, as highlighted in the cited article) – it is the social leaderships who fail to see the implications – or are incompetent (could even be loaded with inclinations of some sort) in setting directions for Tech to follow. I have seen news somewhere that even Tech companies themselves wanted to be regulated! Where is the problem then? Perhaps one of the problems is that our social leaders are more interested in the indulgence of in-fighting – competing with one another to catch news headlines – thus spending most of their energies on matters (including the next election) other than serving people. Even if they produce something, they lack due diligence and go against the interests of some. Thus, their main mandated job gets undone or done lousily.

    C. Root-cause Thought Processes.

    The question asked in the last line of bullet 1 (in A) and the dilemma shown in the answers to B indicate that, yes, we are neither willing nor have any interest in spending time to think about the root causes of problems or inconsistencies (whatever one likes to call them). Not that some are not thinking and trying to raise their voices – but they are immediately muted or sent to the back benches by very loud and pervasive political and media rhetoric, advertisements, and other avenues of influence. Perhaps a label like "unimportant nuisance" is used to describe such thinking. So, wouldn't you say that our thought processes looking to find root causes may continue to remain in the dark, waiting for a ray of light toward wisdom?

    Dilip

    -------

    Dr. Dilip K Barua, Ph.D

    Website Links and Profile




  • 16.  RE: AI in Practice Today

    Posted 02-07-2024 10:16 AM

    Thanks, as always Dilip, for your thoughtful reply and insights.

    "     "All are human – including engineers. And yes, individually, we all learn from failures (they are painful and happen to be conditioned cases of different degrees)"

    After even just skimming through the three books noted, as well as other papers, it is clear based on the results that

    "No, we do not all learn from failures."

    And moving toward even more tech, when the root causes of failure are deeply rooted in a lack of knowledge regarding people, process, and leadership, invites more disasters.

    Cheers,

    Bill



    ------------------------------
    William M. Hayden Jr., Ph.D., P.E., CMQ/OE, F.ASCE
    Buffalo, N.Y.

    "It is never too late to be what you might have been." -- George Eliot 1819 - 1880
    ------------------------------



  • 17.  RE: AI in Practice Today

    Posted 02-06-2024 10:35 AM

    This is an interesting topic and worthy of discussion. In addition to the (excellent) books recommended, I'll suggest the following . . .

        -   The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail by Clayton M. Christensen

        -   Unmasking AI: My Mission to Protect What is Human in a World of Machines by Joy Buolamwini

     My concern lies with the geometrical integrity of 3-D spatial data - used extensively by engineers (and others). In particular, Buolamwini addresses the "algorithmic justice" of AI as related to misuse in facial recognition. That concern translates to a discussion of AI and algorithmic integrity and the "elephant in the room." An overall resource and a particular discussion of AI and 3-D can be found at . . .

       - http://www.tru3d.xyz - a collection of articles promoting the use of a 3-D model for 3-D spatial data.

       - http://www.globalcogo.com/3D-and-AI.pdf - written in response to the Buolamwini book.

    The process of abstraction provides some surprises when starting with the assumption of a single origin for 3-D spatial data - that is, the 3-D Global Spatial Data Model (GSDM).

      



    ------------------------------
    Earl Burkholder P.E., P.S., F.ASCE
    President
    Global COGO, Inc.
    Las Cruces NM
    ------------------------------