

VDA Publishes Handbook on AI Applications in Quality Management

嘉峪檢測網(wǎng)        2026-04-15 21:41

The VDA recently published its handbook on AI applications in quality management. The document covers terminology for AI in quality management, success factors for implementation, required competencies, approval processes, concrete application scenarios, and risk assessment of development tools. It is intended as a structured guide for specialists and managers in quality assurance, production, development, IT and data science.

 

AI Hallucinations

 

The document notes that generative AI carries a risk of "hallucinations": the generation of plausible-sounding but factually incorrect content. Further risks include misinterpretation, bias in the data, and a lack of traceability of answers. These risks can lead an AI system to generate incorrect recommendations or content. The handbook requires that they be taken into account during validation and operation, and recommends limiting the use of generative AI in safety-critical areas.

In generative systems, there is a risk of "hallucinations" – that is, the generation of plausible-sounding but factually incorrect content. This risk must be taken into account during validation and operation (see chapter 2.4.10 "Hallucination"). In addition, there are risks such as misinterpretations, bias in the data or lack of traceability of the answers. These aspects must be taken into account during design and operation.


...

 

In quality-critical applications, hallucination can cause an AI system to generate incorrect recommendations for action or classifications – for example, a faulty diagnosis in predictive quality or a wrong cause in a complaint analysis. This endangers process reliability and product safety.


 

Typical challenges:

• Lack of validation of generated content

• Users placing too much trust in AI outputs

• Use of models in contexts for which they have not been trained

• Inadequate dataset or prompt design in generative AI systems

 

Safeguarding measures:

• Use of verification mechanisms (e.g. cross-checks, plausibility checks)

• Limiting the use of generative AI to non-safety-critical areas

• Training users in the handling of AI outputs

• Combining with classical QM methods for validation (e.g. random sampling)

 

 

AI Is Not a Replacement for Humans

 

The document states explicitly that AI may provide support but never replaces a specialist's decision. It sets out the human-in-the-loop principle: any necessary verification of AI system outputs must be planned, and the responsibility and authority for release must rest with authorized personnel.

The aim is to reduce recurring documentation effort and at the same time improve the quality... The procedure is designed in such a way that the AI provides support but does not replace a specialist's decision. The responsibility and power to release remain with people with the appropriate authorizations and roles.


...

 

Human review and release of the actions: The auditor sees the AI suggestions including classification, rule references and history. The auditor decides which actions to accept, adapt or reject.


...

 

The influence of the outputs of the AI system on other processes or decisions must be taken into account. Any necessary verification of the AI system outputs must be planned (e.g. human-in-the-loop).


 

 

The document also contains application examples and associated risk notes for AI in document review, FMEA risk assessment, deviation reports (8D reports), predictive process control, preventive maintenance (predictive maintenance), document comparison (SOP / standard / contract comparison), optical quality inspection, and other areas.

 


Source: GMP Office (GMP辦公室)
