

Shota Kato
February 03, 2026

Current State and Future Outlook for Process Control and Natural Language Processing

Presentation slides for the 1st AI Control Working Group Meeting, January 23, 2026.
URL: https://www.psec.jp/ai%E5%88%B6%E5%BE%A1wg%E7%AC%AC1%E5%9B%9E%E7%A0%94%E7%A9%B6%E4%BC%9A/



Transcript

  1. Self-introduction: Shota Kato. Completed the Department of Chemical Engineering, Graduate School of Engineering, Kyoto University, and the Department of Systems Science, Graduate School of Informatics at the same university; currently assistant professor there; visiting researcher at the University of Manchester; invited researcher at AIST. Specialty: chemical engineering × data science × natural language processing. Research: modeling and control of manufacturing processes; making physical-model construction more efficient. Twitter: @s_kat. Hobbies: photography, cycling, golf, Irish whiskey (Bushmills), and games.
  2. Language models (Language Model, LM):
     • A language model (LM) predicts the probability p(x_1, ..., x_n) of a word sequence.
     • Using an LM, the word that follows a given sequence can be predicted.
     • A large language model (LLM) is an LM with a large number of parameters, trained on large-scale data. Performance basically scales with parameter count; recently, open-weight LLMs exceeding ...B parameters have also been increasing.
     Example — "What is Kyoto's specialty?":
     p(Kyoto, 's, specialty, is, natto) = 0.00000082
     p(Kyoto, 's, specialty, is, ramen) = 0.00000197
     p(Kyoto, 's, specialty, is, sushi) = 0.00000019
     x̂_n = argmax_{x_n} p(x_1, ..., x_{n-1}, x_n)
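The argmax step above can be sketched in a few lines. The tiny conditional-probability table mirrors the slide's Kyoto example; the numbers are the slide's illustrative values, and the table itself is a toy stand-in for a real LM.

```python
# Toy next-word prediction: pick the candidate x_n that maximizes
# p(x_1, ..., x_{n-1}, x_n). The probability table is illustrative only.
BIGRAM = {
    ("Kyoto", "specialty"): {
        "natto": 0.00000082,
        "ramen": 0.00000197,
        "sushi": 0.00000019,
    },
}

def next_word(context, model):
    """Return the argmax next word and its probability for a context."""
    dist = model[tuple(context)]
    word = max(dist, key=dist.get)
    return word, dist[word]

word, p = next_word(["Kyoto", "specialty"], BIGRAM)
```

A real LLM replaces the lookup table with a neural network over the whole vocabulary, but the decoding rule is the same argmax (or sampling) over p.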
  3. What can LLMs do?
     • Simply by devising the input (prompt), an LLM can solve a wide variety of tasks.
     • Many prompt-design methods have been proposed for achieving high performance with LLMs [Liu+, ACM Comput. Surv., 2023]:
       • In-context learning (ICL; also called few-shot learning) [Brown+, NeurIPS, 2020]
       • Chain-of-Thought (CoT) prompting [Wei+, NeurIPS, 2022]
       • Self-consistency (majority voting) [Wang+, ICLR, 2023]
     • To reach high performance with an LLM, practitioners also select the model (training data, parameter count, with or without reasoning, etc.) and define task-specific workflows.
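Of the prompting techniques listed, self-consistency [Wang+, ICLR, 2023] is the easiest to sketch: sample several answers and take a majority vote. `sample_answer` below is a hypothetical stand-in for a sampled LLM call; the canned answers are invented.

```python
# Minimal self-consistency sketch: majority vote over sampled answers.
from collections import Counter

def self_consistency(sample_answer, n_samples=5):
    """Sample n final answers and return the majority answer plus agreement."""
    votes = Counter(sample_answer(i) for i in range(n_samples))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_samples

# Hypothetical sampled final answers from five CoT runs.
canned = ["42", "42", "41", "42", "40"]
answer, agreement = self_consistency(lambda i: canned[i], n_samples=5)
```

The agreement ratio is a cheap confidence signal: low agreement across samples suggests the prompt or task needs rework.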
  4. Example: implementation coding of a new control method with an LLM. Given an implementation goal as input, the LLM returns code and results — but problems often arise:
     • The LLM does not understand the domain-specific terminology in the input.
     • The LLM's output format differs from the instructions.
     • Code is produced, but it does not run.
     • Instead of the intended implementation, only exception handling is implemented.
     • ...
     <Input> "Propose a method that outperforms method A. The target system is process X; show its usefulness via simulation." → LLM → <Output> "In experiments on process X, we demonstrated that the proposed method outperforms method A. The code is ..."
  5. (Same example, with the first remedy added) The LLM does not understand domain-specific terminology → knowledge augmentation via RAG.
  6. (Second remedy added) The output format differs from the instructions → format constraints via structured output.
  7. An essential technique for making outputs verifiable: structured output.
     • Fix the LLM's output format: free text is hard to verify, so define a schema.
     • Evaluating structure and content separately makes outputs easier to maintain and improve.
     • JSON output is widely used [Geng+, arXiv, 2025].
     • Structuring also lets existing rule checks be applied.
     Without structuring: free-text output as in the example above. With structuring: target process: process X; variables: [V, t, ...]; performance of the existing method: ...; ...
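A schema check of the kind described can be sketched with the standard library alone. The field names (target_process, variables, baseline_performance) follow the slide's structured example and are assumptions, not a fixed standard.

```python
# Sketch: validate structured LLM output against a simple schema.
import json

SCHEMA = {  # required field name -> required Python type
    "target_process": str,
    "variables": list,
    "baseline_performance": str,
}

def validate(raw: str, schema: dict):
    """Parse JSON and check required keys/types; return (ok, errors)."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        return False, [f"invalid JSON: {e}"]
    errors = [f"missing or wrong type: {k}"
              for k, t in schema.items()
              if not isinstance(obj.get(k), t)]
    return not errors, errors

ok, errs = validate('{"target_process": "X", "variables": ["V", "t"], '
                    '"baseline_performance": "..."}', SCHEMA)
```

In practice a JSON Schema validator (or the constrained-decoding features benchmarked in [Geng+, arXiv, 2025]) replaces this hand-rolled check, but the separation of structure from content is the same.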
  8. (Third remedy added) Code is produced but does not run → tool use by the LLM.
  9. (Fourth remedy added) Only exception handling is implemented instead of the intended logic → verification by agents.
  10. To realize implementation coding of new control methods with an LLM, getting good results takes deliberate engineering:
      • The LLM does not understand domain-specific terminology → knowledge augmentation via RAG.
      • The output format differs from the instructions → format constraints via structured output.
      • Code is produced but does not run → tool use by the LLM.
      • Only exception handling is implemented instead of the intended logic → verification by agents.
      • ...
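The generate-and-verify loop argued for above can be sketched as follows. Everything here is a toy: `generate` stands in for an LLM call, and the checker simply executes the candidate and reports failures back as feedback.

```python
# Sketch of an agent-style verification loop: generate a candidate,
# check it, and retry with failure feedback until it passes or we give up.
def verify_and_retry(generate, check, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(feedback)
        ok, feedback = check(candidate)
        if ok:
            return candidate
    return None  # give up after max_rounds

# Toy stand-ins: the "LLM" returns broken code first, working code second.
attempts = iter(["raise NotImplementedError", "result = 2 + 2"])

def generate(feedback):
    return next(attempts)

def check(code):
    env = {}
    try:
        exec(code, env)  # run the candidate in a scratch namespace
        return ("result" in env), "no `result` produced"
    except Exception as e:
        return False, str(e)

good = verify_and_retry(generate, check)
```

This catches exactly the failure mode from the slide — code that raises instead of implementing the intended logic — because the check asserts on the produced artifact, not on the LLM's claim of success.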
  11. Use of LLMs in the chemical engineering field. According to materials from the New Process Control Workshop, there is research on prompt engineering, materials development, and so on, but the field is not as active as AI-focused conferences. At the AIChE Annual Meeting, a number of presentations had "large language model (LLM)" in the title; a subset:
      • Prompt Engineering Directed in-Context Learning of an LLM with Integrated Automated Simulation for Generative Chemical Process
      • A Large Language Model for Chemical Process Engineering: PILOT
      • Integrating Chemistry Knowledge in Large Language Models Via Prompt Engineering
      • Interpretable Fault Detection in Chemical Processes Using Large Language Models
      • Large Language Models for Discovering Equations
      • Large Language Model and Multimodal Learning Framework for Catalyst Discovery
  12. Problems and solutions when using LLMs in the PSE field:
      • The LLM does not understand domain-specific terminology → knowledge augmentation via RAG: use of ontologies (OntoCAPE [Morbach+, Eng. Appl. Artif. Intell., 2007]) and of a PSE-specialized language model [Kato+, IFAC-PapersOnLine, 2022].
      • The output format differs from the instructions → format constraints via structured output: constraints grounded in chemical-engineering knowledge — units, dimensions, material/energy balances, consistency with diagrams, ...
      • Code is produced but does not run → tool use by the LLM: connection to PSE-specific execution environments (e.g., process simulators).
      • Only exception handling is implemented instead of the intended logic → verification by agents.
      • A multi-agent system integrating the techniques above.
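A units/dimensions constraint of the kind mentioned can be sketched as a post-hoc check on structured output. The dimension table below is a small invented toy (exponents over length, mass, time), not a units library.

```python
# Sketch: flag variables whose reported unit is dimensionally inconsistent
# with what the schema expects. Dimensions as (length, mass, time) exponents.
UNIT_DIMS = {
    "m": (1, 0, 0), "kg": (0, 1, 0), "s": (0, 0, 1),
    "m/s": (1, 0, -1), "kg/s": (0, 1, -1),
}

def inconsistent(var_units: dict, expected: dict) -> list:
    """Return names of variables whose unit dimension mismatches expectation."""
    return [name for name, unit in var_units.items()
            if UNIT_DIMS.get(unit) != UNIT_DIMS.get(expected.get(name))]

# e.g. an LLM reports a mass flow rate in metres: flagged as inconsistent.
bad = inconsistent({"flow": "m", "velocity": "m/s"},
                   {"flow": "kg/s", "velocity": "m/s"})
```

Real deployments would use a proper units library, but the point stands: chemical-engineering constraints are mechanically checkable once the output is structured.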
  13. Example 2 of LLM use in the PSE field [Zheng+, Can. J. Chem. Eng., 2025]: extend an LLM with a chemical-process-safety knowledge graph plus GraphRAG, and automatically generate and evaluate emergency-response procedures for a chemical-industrial-park accident. Chemical process safety knowledge graph (CPSKG) + GraphRAG + LLM: based on a domain ontology, a CPSKG is built from monitoring data, accident reports, regulations, and plans; GraphRAG retrieves the relevant subgraph and feeds it to the LLM to generate the emergency-response procedure.
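The retrieval step of such a GraphRAG pipeline can be sketched as a k-hop subgraph extraction: collect the triples around the query entity and serialize them as LLM context. The tiny safety graph below is invented for illustration and is not from [Zheng+, Can. J. Chem. Eng., 2025].

```python
# Sketch: k-hop subgraph retrieval from a toy CPSKG-style knowledge graph.
from collections import deque

KG = {  # node -> list of (relation, neighbor); invented toy facts
    "ammonia leak": [("requires", "evacuation"), ("detected_by", "gas sensor")],
    "evacuation": [("follows", "emergency plan")],
    "gas sensor": [],
    "emergency plan": [],
}

def k_hop_subgraph(graph, start, k=2):
    """BFS up to k hops from `start`; return the triples reached."""
    triples, seen, queue = [], {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for rel, nbr in graph.get(node, []):
            triples.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, depth + 1))
    return triples

context = k_hop_subgraph(KG, "ammonia leak", k=2)
```

The retrieved triples would then be rendered as text and prepended to the prompt, grounding the LLM's procedure generation in the graph rather than in its parametric memory.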
  14. Operation support: natural-language operation and question answering.
      • Rule-based conversion of Spanish operation commands into control signals [Romano+, IFAC Proc. Vol., 1992].
      • An industrial QA system combining LLMs (ChatGLM, LLaMA) with BERT [Liu+, Engineering, 2025].
      • A proposed framework connecting AVEVA Process Simulation with an LLM agent via MCP for interactive, semi-autonomous process simulation [Liang+, arXiv, 2025].
  15. Safety and knowledge management: HAZOP analysis and document knowledge.
      • Classification and information extraction from HAZOP documents [Feng+, Process Saf. Environ. Prot., 2021].
      • Using knowledge graphs as grounding for LLMs [Zheng+, Can. J. Chem. Eng., 2025].
  16. Summary: outlook and challenges for Process Control × NLP.
      • Application areas (operation support, abnormality response, control-design support, safety analysis) are broadening, and the NLP techniques span classification, extraction, generation, and dialogue.
      • The current constraints are the preparation of data (technical documents, logs) and the cost of verifying implementations.
      • Going forward, support from LLMs plus a verification layer (constraints, human-in-the-loop) is to be expected(?).
      Diagram: process control × natural language processing (core NLP techniques and LLMs) × AI techniques — classification, extraction, summarization, generation; data preparation (real data, ontologies); domain-specialized LLMs; reinforcement learning; federated learning.
  17. References:
      [Zhao+, arXiv, 2024] W. X. Zhao et al., “A Survey of Large Language Models.” arXiv, 2024.
      [Liu+, ACM Comput. Surv., 2023] P. Liu et al., “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing.” ACM Comput. Surv., 55, 9, 2023.
      [Brown+, NeurIPS, 2020] T. B. Brown et al., “Language Models are Few-Shot Learners.” NeurIPS, 1877–1901, 2020.
      [Wei+, NeurIPS, 2022] J. Wei et al., “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” NeurIPS, 24824–24837, 2022.
      [Wang+, ICLR, 2023] X. Wang et al., “Self-Consistency Improves Chain of Thought Reasoning in Language Models.” ICLR, 2023.
      [Lu+, arXiv, 2024] C. Lu et al., “The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.” arXiv, 2024.
      [Lewis+, NeurIPS, 2020] P. Lewis et al., “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” NeurIPS, 2020.
      [Geng+, arXiv, 2025] S. Geng et al., “JSONSchemaBench: A Rigorous Benchmark of Structured Outputs for Language Models.” arXiv, 2025.
      [Guo+, ACL, 2024] Z. Guo et al., “StableToolBench: Towards Stable Large-Scale Benchmarking on Tool Learning of Large Language Models.” ACL, 2024.
      [Yao+, ICLR, 2023] S. Yao et al., “ReAct: Synergizing Reasoning and Acting in Language Models.” ICLR, 2023.
  18. References:
      [Morbach+, Eng. Appl. Artif. Intell., 2007] J. Morbach et al., “OntoCAPE — A Large-Scale Ontology for Chemical Process Engineering.” Eng. Appl. Artif. Intell., 20, 2, 2007.
      [Kato+, IFAC-PapersOnLine, 2022] S. Kato et al., “ProcessBERT: A Pre-trained Language Model for Judging Equivalence of Variable Definitions in Process Models.” IFAC-PapersOnLine, 55, 7, 2022.
      [Tao+, Comput. Chem. Eng., 2025] X. Tao et al., “From Prompt Design to Iterative Generation: Leveraging LLMs in PSE Applications.” Comput. Chem. Eng., 202, 2025.
      [Zheng+, Can. J. Chem. Eng., 2025] C. Zheng et al., “Chemical Process Safety Domain Knowledge Graph-Enhanced LLM for Efficient Emergency Response Decision Support.” Can. J. Chem. Eng., 103, 10, 2025.
      [Romano+, IFAC Proc. Vol., 1992] J. M. G. Romano et al., “Natural Language Interface in Control Application to a Pilot Plant.” IFAC Proc. Vol., 25, 6, 1992.
      [Liu+, Engineering, 2025] R. Liu et al., “Knowledge-Enhanced Industrial Question Answering Using Large Language Models.” Engineering (Beijing, China), 2025.
      [Liang+, arXiv, 2025] J. Liang et al., “Large Language Model Agent for User-Friendly Chemical Process Simulations.” arXiv, 2025.
      [Shatewakasi+, IECON, 2024] Y. Shatewakasi et al., “Towards NLP-Driven Online Classification of Industrial Alarm Messages.” IECON, 2024.
      [Jose+, Expert Syst. Appl., 2024] S. Jose et al., “Advancing Multimodal Diagnostics: Integrating Industrial Textual Data and Domain Knowledge with Large Language Models.” Expert Syst. Appl., 255, 2024.
  19. References:
      [Hirtreiter+, AIChE J., 2024] E. Hirtreiter et al., “Toward Automatic Generation of Control Structures for Process Flow Diagrams with Large Language Models.” AIChE J., 70, 1, 2024.
      [Boudribila+, Ing. Syst. D. Inf., 2025] A. Boudribila et al., “Automatic Generation of PLC Control Code from Natural Language Requirement Specifications.” Ing. Syst. D. Inf., 30, 6, 2025.
      [Feng+, Process Saf. Environ. Prot., 2021] X. Feng et al., “Application of Natural Language Processing in HAZOP Reports.” Process Saf. Environ. Prot., 155, 2021.